I0330 21:06:49.369068 6 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0330 21:06:49.369277 6 e2e.go:109] Starting e2e run "23a40af6-e42f-4314-9a7f-91a6516d1b41" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1585602408 - Will randomize all specs
Will run 278 of 4843 specs

Mar 30 21:06:49.421: INFO: >>> kubeConfig: /root/.kube/config
Mar 30 21:06:49.428: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 30 21:06:49.444: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Mar 30 21:06:49.479: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Mar 30 21:06:49.479: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Mar 30 21:06:49.479: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Mar 30 21:06:49.489: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Mar 30 21:06:49.489: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Mar 30 21:06:49.489: INFO: e2e test version: v1.17.3
Mar 30 21:06:49.490: INFO: kube-apiserver version: v1.17.2
Mar 30 21:06:49.490: INFO: >>> kubeConfig: /root/.kube/config
Mar 30 21:06:49.496: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 30 21:06:49.497: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
Mar 30 21:06:49.554: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
Mar 30 21:06:49.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 30 21:07:02.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8144" for this suite.
• [SLOW TEST:13.256 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":1,"skipped":41,"failed":0}
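The spec above flips served: false on one version of a multi-version CRD and then checks the aggregated OpenAPI document. A minimal hand-run sketch of the same behavior (the foos.example.com CRD, its group, and the grep pattern are hypothetical, not taken from the test):

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com            # hypothetical CRD
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true                    # stays in the published OpenAPI spec
    storage: true
    schema:
      openAPIV3Schema:
        type: object
  - name: v2
    served: false                   # unserved: its definition should drop out of the spec
    storage: false
    schema:
      openAPIV3Schema:
        type: object
EOF
# the unserved version's definition should be absent from the aggregated spec
# ("com.example.v2.Foo" is an assumption about the definition naming scheme)
kubectl get --raw /openapi/v2 | grep -c com.example.v2.Foo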
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 30 21:07:02.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 30 21:07:06.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6761" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have a terminated reason [NodeConformance] [Conformance]","total":278,"completed":2,"skipped":99,"failed":0}
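What this spec asserts can be approximated by hand: a container whose command always exits non-zero should end up with state.terminated populated and a non-empty reason (the kubelet records "Error" for a non-zero exit). A sketch, with a hypothetical pod name:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: always-fails                # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["/bin/false"]         # always exits 1
EOF
# once the container has run, the terminated reason should be set
kubectl get pod always-fails -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'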
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":2,"skipped":99,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:07:06.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-7d93d91d-99cd-442d-bd3d-e4ff91abdf2e STEP: Creating a pod to test consume secrets Mar 30 21:07:06.982: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f9317d27-4e48-4032-9b95-72eaf1edd6ee" in namespace "projected-5379" to be "success or failure" Mar 30 21:07:06.986: INFO: Pod "pod-projected-secrets-f9317d27-4e48-4032-9b95-72eaf1edd6ee": Phase="Pending", Reason="", readiness=false. Elapsed: 3.930255ms Mar 30 21:07:08.991: INFO: Pod "pod-projected-secrets-f9317d27-4e48-4032-9b95-72eaf1edd6ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008142847s Mar 30 21:07:10.995: INFO: Pod "pod-projected-secrets-f9317d27-4e48-4032-9b95-72eaf1edd6ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012512085s STEP: Saw pod success Mar 30 21:07:10.995: INFO: Pod "pod-projected-secrets-f9317d27-4e48-4032-9b95-72eaf1edd6ee" satisfied condition "success or failure" Mar 30 21:07:10.998: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-f9317d27-4e48-4032-9b95-72eaf1edd6ee container projected-secret-volume-test: STEP: delete the pod Mar 30 21:07:11.031: INFO: Waiting for pod pod-projected-secrets-f9317d27-4e48-4032-9b95-72eaf1edd6ee to disappear Mar 30 21:07:11.067: INFO: Pod pod-projected-secrets-f9317d27-4e48-4032-9b95-72eaf1edd6ee no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:07:11.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5379" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":3,"skipped":106,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:07:11.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-f04846c3-e580-4de7-81de-80d73220eb68 STEP: Creating a pod to test consume configMaps Mar 30 21:07:11.132: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5f0983b2-2a43-418d-bf54-91aedb968961" in namespace "projected-4114" to be "success or failure" Mar 30 21:07:11.136: INFO: Pod "pod-projected-configmaps-5f0983b2-2a43-418d-bf54-91aedb968961": Phase="Pending", Reason="", readiness=false. Elapsed: 3.786805ms Mar 30 21:07:13.140: INFO: Pod "pod-projected-configmaps-5f0983b2-2a43-418d-bf54-91aedb968961": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007660225s Mar 30 21:07:15.144: INFO: Pod "pod-projected-configmaps-5f0983b2-2a43-418d-bf54-91aedb968961": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011469956s STEP: Saw pod success Mar 30 21:07:15.144: INFO: Pod "pod-projected-configmaps-5f0983b2-2a43-418d-bf54-91aedb968961" satisfied condition "success or failure" Mar 30 21:07:15.147: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-5f0983b2-2a43-418d-bf54-91aedb968961 container projected-configmap-volume-test: STEP: delete the pod Mar 30 21:07:15.201: INFO: Waiting for pod pod-projected-configmaps-5f0983b2-2a43-418d-bf54-91aedb968961 to disappear Mar 30 21:07:15.239: INFO: Pod pod-projected-configmaps-5f0983b2-2a43-418d-bf54-91aedb968961 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:07:15.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4114" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":4,"skipped":114,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:07:15.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-6068ad1d-d2da-44c1-8a6f-abb363b72dc5 STEP: Creating a pod to test consume configMaps Mar 30 21:07:15.317: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b2581c00-c4bb-4725-90c2-c44ec7102bf8" in namespace "projected-4634" to be "success or failure" Mar 30 21:07:15.321: INFO: Pod "pod-projected-configmaps-b2581c00-c4bb-4725-90c2-c44ec7102bf8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017514ms Mar 30 21:07:17.335: INFO: Pod "pod-projected-configmaps-b2581c00-c4bb-4725-90c2-c44ec7102bf8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017628068s Mar 30 21:07:19.340: INFO: Pod "pod-projected-configmaps-b2581c00-c4bb-4725-90c2-c44ec7102bf8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022098689s STEP: Saw pod success Mar 30 21:07:19.340: INFO: Pod "pod-projected-configmaps-b2581c00-c4bb-4725-90c2-c44ec7102bf8" satisfied condition "success or failure" Mar 30 21:07:19.343: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-b2581c00-c4bb-4725-90c2-c44ec7102bf8 container projected-configmap-volume-test: STEP: delete the pod Mar 30 21:07:19.365: INFO: Waiting for pod pod-projected-configmaps-b2581c00-c4bb-4725-90c2-c44ec7102bf8 to disappear Mar 30 21:07:19.393: INFO: Pod pod-projected-configmaps-b2581c00-c4bb-4725-90c2-c44ec7102bf8 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:07:19.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4634" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":5,"skipped":124,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 30 21:07:19.402: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 30 21:07:36.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3064" for this suite.
• [SLOW TEST:17.113 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":278,"completed":6,"skipped":205,"failed":0}
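The quota steps above boil down to object-count tracking in status.used: creating a secret raises the secrets count under the quota, and deleting it releases the usage once the quota controller resyncs. A hand-run sketch (quota and secret names hypothetical; the hard limit leaves room for the namespace's service-account token secrets on a 1.17-era cluster):

kubectl create quota demo-quota --hard=secrets=5
kubectl create secret generic quota-probe --from-literal=k=v
kubectl get quota demo-quota -o jsonpath='{.status.used.secrets}'   # should now include quota-probe
kubectl delete secret quota-probe
kubectl get quota demo-quota -o jsonpath='{.status.used.secrets}'   # should drop after a resync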
[Conformance]","total":278,"completed":6,"skipped":205,"failed":0} SSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:07:36.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8021 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8021;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8021 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8021;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8021.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8021.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8021.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8021.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8021.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8021.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8021.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8021.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8021.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8021.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8021.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8021.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8021.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 144.190.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.190.144_udp@PTR;check="$$(dig +tcp +noall +answer +search 144.190.98.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.98.190.144_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8021 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8021;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8021 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8021;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8021.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8021.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8021.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8021.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8021.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8021.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8021.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8021.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8021.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8021.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8021.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8021.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8021.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 144.190.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.190.144_udp@PTR;check="$$(dig +tcp +noall +answer +search 144.190.98.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.98.190.144_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 30 21:07:42.710: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:42.714: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:42.717: INFO: Unable to read wheezy_udp@dns-test-service.dns-8021 from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:42.721: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8021 from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:42.724: INFO: Unable to read wheezy_udp@dns-test-service.dns-8021.svc from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:42.728: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8021.svc from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:42.732: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8021.svc from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:42.735: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8021.svc from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:42.755: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:42.758: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:42.761: INFO: Unable to read jessie_udp@dns-test-service.dns-8021 from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:42.764: INFO: Unable to read jessie_tcp@dns-test-service.dns-8021 from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:42.767: INFO: Unable to read jessie_udp@dns-test-service.dns-8021.svc from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:42.771: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-8021.svc from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:42.774: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8021.svc from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:42.777: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8021.svc from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:42.809: INFO: Lookups using dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8021 wheezy_tcp@dns-test-service.dns-8021 wheezy_udp@dns-test-service.dns-8021.svc wheezy_tcp@dns-test-service.dns-8021.svc wheezy_udp@_http._tcp.dns-test-service.dns-8021.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8021.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8021 jessie_tcp@dns-test-service.dns-8021 jessie_udp@dns-test-service.dns-8021.svc jessie_tcp@dns-test-service.dns-8021.svc jessie_udp@_http._tcp.dns-test-service.dns-8021.svc jessie_tcp@_http._tcp.dns-test-service.dns-8021.svc] Mar 30 21:07:47.814: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:47.817: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:47.820: INFO: Unable to read wheezy_udp@dns-test-service.dns-8021 from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:47.823: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8021 from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:47.826: INFO: Unable to read wheezy_udp@dns-test-service.dns-8021.svc from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:47.828: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8021.svc from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:47.831: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8021.svc from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:47.836: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8021.svc from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:47.858: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:47.860: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:47.862: INFO: Unable to read jessie_udp@dns-test-service.dns-8021 from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:47.864: INFO: Unable to read jessie_tcp@dns-test-service.dns-8021 from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:47.866: INFO: Unable to read jessie_udp@dns-test-service.dns-8021.svc from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:47.868: INFO: Unable to read jessie_tcp@dns-test-service.dns-8021.svc from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:47.870: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8021.svc from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:47.873: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8021.svc from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:47.889: INFO: Lookups using dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8021 wheezy_tcp@dns-test-service.dns-8021 wheezy_udp@dns-test-service.dns-8021.svc wheezy_tcp@dns-test-service.dns-8021.svc wheezy_udp@_http._tcp.dns-test-service.dns-8021.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8021.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8021 jessie_tcp@dns-test-service.dns-8021 jessie_udp@dns-test-service.dns-8021.svc jessie_tcp@dns-test-service.dns-8021.svc jessie_udp@_http._tcp.dns-test-service.dns-8021.svc jessie_tcp@_http._tcp.dns-test-service.dns-8021.svc] Mar 30 21:07:52.813: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:52.816: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:52.819: INFO: Unable to read wheezy_udp@dns-test-service.dns-8021 from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:52.822: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8021 from pod 
dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:52.824: INFO: Unable to read wheezy_udp@dns-test-service.dns-8021.svc from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:52.827: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8021.svc from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:52.830: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8021.svc from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:52.833: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8021.svc from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:52.855: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:52.857: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:52.860: INFO: Unable to read jessie_udp@dns-test-service.dns-8021 from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:52.863: INFO: Unable to read jessie_tcp@dns-test-service.dns-8021 from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:52.866: INFO: Unable to read jessie_udp@dns-test-service.dns-8021.svc from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:52.869: INFO: Unable to read jessie_tcp@dns-test-service.dns-8021.svc from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:52.871: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8021.svc from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:52.874: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8021.svc from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:52.900: INFO: Lookups using dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8021 wheezy_tcp@dns-test-service.dns-8021 wheezy_udp@dns-test-service.dns-8021.svc wheezy_tcp@dns-test-service.dns-8021.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-8021.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8021.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8021 jessie_tcp@dns-test-service.dns-8021 jessie_udp@dns-test-service.dns-8021.svc jessie_tcp@dns-test-service.dns-8021.svc jessie_udp@_http._tcp.dns-test-service.dns-8021.svc jessie_tcp@_http._tcp.dns-test-service.dns-8021.svc] Mar 30 21:07:57.814: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:57.817: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:57.820: INFO: Unable to read wheezy_udp@dns-test-service.dns-8021 from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:57.823: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8021 from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:57.826: INFO: Unable to read wheezy_udp@dns-test-service.dns-8021.svc from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:57.829: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8021.svc from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:57.832: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8021.svc from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:57.835: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8021.svc from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:57.859: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:57.863: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:57.866: INFO: Unable to read jessie_udp@dns-test-service.dns-8021 from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:57.868: INFO: Unable to read jessie_tcp@dns-test-service.dns-8021 from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:57.871: INFO: Unable to read jessie_udp@dns-test-service.dns-8021.svc from pod 
dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:57.873: INFO: Unable to read jessie_tcp@dns-test-service.dns-8021.svc from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:57.875: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8021.svc from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:57.878: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8021.svc from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:07:57.895: INFO: Lookups using dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8021 wheezy_tcp@dns-test-service.dns-8021 wheezy_udp@dns-test-service.dns-8021.svc wheezy_tcp@dns-test-service.dns-8021.svc wheezy_udp@_http._tcp.dns-test-service.dns-8021.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8021.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8021 jessie_tcp@dns-test-service.dns-8021 jessie_udp@dns-test-service.dns-8021.svc jessie_tcp@dns-test-service.dns-8021.svc jessie_udp@_http._tcp.dns-test-service.dns-8021.svc jessie_tcp@_http._tcp.dns-test-service.dns-8021.svc] Mar 30 21:08:02.813: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:08:02.816: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:08:02.820: INFO: Unable to read wheezy_udp@dns-test-service.dns-8021 from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:08:02.823: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8021 from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:08:02.827: INFO: Unable to read wheezy_udp@dns-test-service.dns-8021.svc from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:08:02.830: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8021.svc from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:08:02.833: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8021.svc from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:08:02.836: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8021.svc from pod 
dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:08:02.858: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:08:02.861: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:08:02.864: INFO: Unable to read jessie_udp@dns-test-service.dns-8021 from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:08:02.867: INFO: Unable to read jessie_tcp@dns-test-service.dns-8021 from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:08:02.870: INFO: Unable to read jessie_udp@dns-test-service.dns-8021.svc from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:08:02.873: INFO: Unable to read jessie_tcp@dns-test-service.dns-8021.svc from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:08:02.893: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8021.svc from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:08:02.896: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8021.svc from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:08:02.917: INFO: Lookups using dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8021 wheezy_tcp@dns-test-service.dns-8021 wheezy_udp@dns-test-service.dns-8021.svc wheezy_tcp@dns-test-service.dns-8021.svc wheezy_udp@_http._tcp.dns-test-service.dns-8021.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8021.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8021 jessie_tcp@dns-test-service.dns-8021 jessie_udp@dns-test-service.dns-8021.svc jessie_tcp@dns-test-service.dns-8021.svc jessie_udp@_http._tcp.dns-test-service.dns-8021.svc jessie_tcp@_http._tcp.dns-test-service.dns-8021.svc] Mar 30 21:08:07.813: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:08:07.816: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:08:07.819: INFO: Unable to read wheezy_udp@dns-test-service.dns-8021 from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the 
server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:08:07.821: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8021 from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:08:07.824: INFO: Unable to read wheezy_udp@dns-test-service.dns-8021.svc from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:08:07.827: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8021.svc from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:08:07.829: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8021.svc from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:08:07.832: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8021.svc from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:08:07.850: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:08:07.852: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:08:07.855: INFO: Unable to read jessie_udp@dns-test-service.dns-8021 from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:08:07.857: INFO: Unable to read jessie_tcp@dns-test-service.dns-8021 from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:08:07.860: INFO: Unable to read jessie_udp@dns-test-service.dns-8021.svc from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:08:07.862: INFO: Unable to read jessie_tcp@dns-test-service.dns-8021.svc from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:08:07.865: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8021.svc from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:08:07.871: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8021.svc from pod dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a: the server could not find the requested resource (get pods dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a) Mar 30 21:08:07.887: INFO: Lookups using dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a failed for: [wheezy_udp@dns-test-service 
wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8021 wheezy_tcp@dns-test-service.dns-8021 wheezy_udp@dns-test-service.dns-8021.svc wheezy_tcp@dns-test-service.dns-8021.svc wheezy_udp@_http._tcp.dns-test-service.dns-8021.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8021.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8021 jessie_tcp@dns-test-service.dns-8021 jessie_udp@dns-test-service.dns-8021.svc jessie_tcp@dns-test-service.dns-8021.svc jessie_udp@_http._tcp.dns-test-service.dns-8021.svc jessie_tcp@_http._tcp.dns-test-service.dns-8021.svc] Mar 30 21:08:12.916: INFO: DNS probes using dns-8021/dns-test-fd28782c-0582-46d9-80d9-3811b4ed593a succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:08:13.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8021" for this suite. • [SLOW TEST:36.936 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":7,"skipped":210,"failed":0} SSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:08:13.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap that has name configmap-test-emptyKey-3fff9e93-5338-40bf-a275-45c89633ca16 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:08:13.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6796" for this suite. 
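The wheezy/jessie loops above depend on the pod's resolv.conf search path: partial names like dns-test-service or dns-test-service.dns-8021 only resolve because dig +search appends the namespace and cluster suffixes. The repeated "Unable to read ... from pod" entries are expected polling noise while the probe pods come up and write their /results files; the probes converge at 21:08:12.916. The same resolution can be checked by hand from any pod in the namespace, e.g.:

# run inside a pod in namespace dns-8021 (namespace taken from the log above)
cat /etc/resolv.conf                                 # search dns-8021.svc.cluster.local svc.cluster.local cluster.local ...
dig +search +short dns-test-service A                # partial name, completed via the search list
dig +search +short dns-test-service.dns-8021.svc A
dig +search +short _http._tcp.dns-test-service.dns-8021.svc SRV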
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":8,"skipped":214,"failed":0} ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:08:13.617: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:09:13.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3791" for this suite. • [SLOW TEST:60.092 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":9,"skipped":214,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:09:13.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9753.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-9753.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9753.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-9753.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 30 21:09:19.829: INFO: DNS probes using dns-test-87452959-4df2-492d-be21-2c468dfd51ea succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these 
------------------------------
[k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 30 21:08:13.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 30 21:09:13.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3791" for this suite.
• [SLOW TEST:60.092 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":9,"skipped":214,"failed":0}
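The spec waits a full minute to confirm two things: the pod never reports Ready, and, because readiness failures do not restart containers (only liveness failures do), restartCount stays 0. Sketch (name hypothetical):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: never-ready
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]     # always fails, so the container never becomes Ready
      initialDelaySeconds: 1
      periodSeconds: 5
EOF
# ready should stay false and restartCount should stay 0
kubectl get pod never-ready -o jsonpath='{.status.containerStatuses[0].ready} {.status.containerStatuses[0].restartCount}'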
' instead of 'bar.example.com.' Mar 30 21:09:46.013: INFO: Lookups using dns-9753/dns-test-e5b79520-0ca7-4c10-a5e3-5fb707237f35 failed for: [wheezy_udp@dns-test-service-3.dns-9753.svc.cluster.local jessie_udp@dns-test-service-3.dns-9753.svc.cluster.local] Mar 30 21:09:50.973: INFO: DNS probes using dns-test-e5b79520-0ca7-4c10-a5e3-5fb707237f35 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9753.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-9753.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-9753.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-9753.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 30 21:09:57.363: INFO: DNS probes using dns-test-535232d8-76fe-403b-b231-fd92b45f1dd5 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:09:57.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9753" for this suite. • [SLOW TEST:43.773 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":10,"skipped":224,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:09:57.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-ff054ee5-e03d-403e-a6ca-ef5e5e80c8cc STEP: Creating a pod to test consume secrets Mar 30 21:09:57.933: INFO: Waiting up to 5m0s for pod "pod-secrets-b3c3eef0-80e1-451e-8f99-e79f4da74b6e" in namespace "secrets-4592" to be "success or failure" Mar 30 21:09:57.942: INFO: Pod "pod-secrets-b3c3eef0-80e1-451e-8f99-e79f4da74b6e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.331505ms Mar 30 21:09:59.955: INFO: Pod "pod-secrets-b3c3eef0-80e1-451e-8f99-e79f4da74b6e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021858841s Mar 30 21:10:01.959: INFO: Pod "pod-secrets-b3c3eef0-80e1-451e-8f99-e79f4da74b6e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.025672793s STEP: Saw pod success Mar 30 21:10:01.959: INFO: Pod "pod-secrets-b3c3eef0-80e1-451e-8f99-e79f4da74b6e" satisfied condition "success or failure" Mar 30 21:10:01.962: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-b3c3eef0-80e1-451e-8f99-e79f4da74b6e container secret-volume-test: STEP: delete the pod Mar 30 21:10:02.008: INFO: Waiting for pod pod-secrets-b3c3eef0-80e1-451e-8f99-e79f4da74b6e to disappear Mar 30 21:10:02.020: INFO: Pod pod-secrets-b3c3eef0-80e1-451e-8f99-e79f4da74b6e no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:10:02.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4592" for this suite. STEP: Destroying namespace "secret-namespace-4237" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":11,"skipped":235,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:10:02.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Mar 30 21:10:06.736: INFO: Successfully updated pod "annotationupdatecc741411-4250-4807-8f5c-bd5ea46b725f" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:10:08.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2306" for this suite. 
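A note on the spec above: it exercises the downwardAPI volume source, which projects pod metadata into files that the kubelet refreshes when the metadata changes. A minimal sketch of such a pod follows; the names, image, and annotation value are illustrative, not the generated ones from this run.

apiVersion: v1
kind: Pod
metadata:
  name: annotation-update-demo        # illustrative name
  annotations:
    build: "one"                      # value the test would later patch
spec:
  containers:
  - name: client
    image: busybox                    # any image with a shell works
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations

Changing the annotation (e.g. kubectl annotate pod annotation-update-demo build=two --overwrite) should show up in /etc/podinfo/annotations without restarting the pod.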
• [SLOW TEST:6.698 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":12,"skipped":245,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:10:08.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-5d333dee-3e25-404e-80cb-88c88b32e0f0 STEP: Creating a pod to test consume secrets Mar 30 21:10:08.829: INFO: Waiting up to 5m0s for pod "pod-secrets-f0c89e0f-a2a1-400b-8832-c4be615b2017" in namespace "secrets-5152" to be "success or failure" Mar 30 21:10:08.888: INFO: Pod "pod-secrets-f0c89e0f-a2a1-400b-8832-c4be615b2017": Phase="Pending", Reason="", readiness=false. Elapsed: 59.242772ms Mar 30 21:10:10.919: INFO: Pod "pod-secrets-f0c89e0f-a2a1-400b-8832-c4be615b2017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.089399575s Mar 30 21:10:12.922: INFO: Pod "pod-secrets-f0c89e0f-a2a1-400b-8832-c4be615b2017": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.093086084s STEP: Saw pod success Mar 30 21:10:12.922: INFO: Pod "pod-secrets-f0c89e0f-a2a1-400b-8832-c4be615b2017" satisfied condition "success or failure" Mar 30 21:10:12.925: INFO: Trying to get logs from node jerma-worker pod pod-secrets-f0c89e0f-a2a1-400b-8832-c4be615b2017 container secret-volume-test: STEP: delete the pod Mar 30 21:10:12.948: INFO: Waiting for pod pod-secrets-f0c89e0f-a2a1-400b-8832-c4be615b2017 to disappear Mar 30 21:10:12.954: INFO: Pod pod-secrets-f0c89e0f-a2a1-400b-8832-c4be615b2017 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:10:12.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5152" for this suite. 
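Background for the spec above: it mounts a Secret as a volume and reads one key back from the mounted file. Roughly, under illustrative names and values:

apiVersion: v1
kind: Secret
metadata:
  name: demo-secret                   # illustrative
data:
  data-1: dmFsdWUtMQ==                # base64 of "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-demo            # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["cat", "/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret

The "success or failure" condition in the log is simply the pod reaching phase Succeeded after the container prints the expected content and exits 0.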
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":13,"skipped":274,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:10:12.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 30 21:10:13.033: INFO: Waiting up to 5m0s for pod "pod-d4741306-35bb-4fab-a492-5445cb29007d" in namespace "emptydir-6122" to be "success or failure" Mar 30 21:10:13.038: INFO: Pod "pod-d4741306-35bb-4fab-a492-5445cb29007d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.675865ms Mar 30 21:10:15.085: INFO: Pod "pod-d4741306-35bb-4fab-a492-5445cb29007d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051703305s Mar 30 21:10:17.089: INFO: Pod "pod-d4741306-35bb-4fab-a492-5445cb29007d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055904696s STEP: Saw pod success Mar 30 21:10:17.089: INFO: Pod "pod-d4741306-35bb-4fab-a492-5445cb29007d" satisfied condition "success or failure" Mar 30 21:10:17.092: INFO: Trying to get logs from node jerma-worker2 pod pod-d4741306-35bb-4fab-a492-5445cb29007d container test-container: STEP: delete the pod Mar 30 21:10:17.141: INFO: Waiting for pod pod-d4741306-35bb-4fab-a492-5445cb29007d to disappear Mar 30 21:10:17.147: INFO: Pod pod-d4741306-35bb-4fab-a492-5445cb29007d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:10:17.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6122" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":14,"skipped":294,"failed":0} SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:10:17.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-49252be7-976a-4d3d-b0a7-a4a4316c7ba1 STEP: Creating a pod to test consume configMaps Mar 30 21:10:17.238: INFO: Waiting up to 5m0s for pod "pod-configmaps-68336138-1654-4725-a300-4e3c104738a6" in namespace "configmap-9202" to be "success or failure" Mar 30 21:10:17.269: INFO: Pod "pod-configmaps-68336138-1654-4725-a300-4e3c104738a6": Phase="Pending", Reason="", readiness=false. Elapsed: 30.174042ms Mar 30 21:10:19.273: INFO: Pod "pod-configmaps-68336138-1654-4725-a300-4e3c104738a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034397538s Mar 30 21:10:21.276: INFO: Pod "pod-configmaps-68336138-1654-4725-a300-4e3c104738a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037970304s STEP: Saw pod success Mar 30 21:10:21.276: INFO: Pod "pod-configmaps-68336138-1654-4725-a300-4e3c104738a6" satisfied condition "success or failure" Mar 30 21:10:21.279: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-68336138-1654-4725-a300-4e3c104738a6 container configmap-volume-test: STEP: delete the pod Mar 30 21:10:21.314: INFO: Waiting for pod pod-configmaps-68336138-1654-4725-a300-4e3c104738a6 to disappear Mar 30 21:10:21.327: INFO: Pod pod-configmaps-68336138-1654-4725-a300-4e3c104738a6 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:10:21.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9202" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":15,"skipped":298,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:10:21.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Mar 30 21:10:22.026: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Mar 30 21:10:24.036: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721199422, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721199422, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721199422, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721199422, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 30 21:10:27.066: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 30 21:10:27.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:10:28.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-8561" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:7.100 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":16,"skipped":299,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:10:28.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-8b8e040e-dbec-4e68-8ac0-27a3409af38c STEP: Creating configMap with name cm-test-opt-upd-83f92fe0-befb-493c-8a88-626371b5ae8f STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-8b8e040e-dbec-4e68-8ac0-27a3409af38c STEP: Updating configmap cm-test-opt-upd-83f92fe0-befb-493c-8a88-626371b5ae8f STEP: Creating configMap with name cm-test-opt-create-d94b82ca-42f6-45fd-bb5e-e1f15ac172dc STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:11:44.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8025" for this suite. 
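The "optional updates" spec above relies on the optional flag of the configMap volume source: with optional: true the pod starts (and keeps running) even if the referenced ConfigMap is deleted or does not exist yet, and the kubelet projects the keys in once it appears. Illustrative sketch:

apiVersion: v1
kind: Pod
metadata:
  name: optional-configmap-demo       # illustrative
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/opt-config/data-1 2>/dev/null; sleep 5; done"]
    volumeMounts:
    - name: optional-config
      mountPath: /etc/opt-config
  volumes:
  - name: optional-config
    configMap:
      name: cm-maybe-missing          # illustrative; may be created or deleted later
      optional: true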
• [SLOW TEST:76.485 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":17,"skipped":315,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:11:44.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 30 21:11:45.011: INFO: Waiting up to 5m0s for pod "downward-api-ac18aa2e-1772-48cd-92b8-45759419d57c" in namespace "downward-api-2694" to be "success or failure" Mar 30 21:11:45.030: INFO: Pod "downward-api-ac18aa2e-1772-48cd-92b8-45759419d57c": Phase="Pending", Reason="", readiness=false. Elapsed: 18.228178ms Mar 30 21:11:47.034: INFO: Pod "downward-api-ac18aa2e-1772-48cd-92b8-45759419d57c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021984783s Mar 30 21:11:49.038: INFO: Pod "downward-api-ac18aa2e-1772-48cd-92b8-45759419d57c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026245871s STEP: Saw pod success Mar 30 21:11:49.038: INFO: Pod "downward-api-ac18aa2e-1772-48cd-92b8-45759419d57c" satisfied condition "success or failure" Mar 30 21:11:49.040: INFO: Trying to get logs from node jerma-worker2 pod downward-api-ac18aa2e-1772-48cd-92b8-45759419d57c container dapi-container: STEP: delete the pod Mar 30 21:11:49.073: INFO: Waiting for pod downward-api-ac18aa2e-1772-48cd-92b8-45759419d57c to disappear Mar 30 21:11:49.077: INFO: Pod downward-api-ac18aa2e-1772-48cd-92b8-45759419d57c no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:11:49.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2694" for this suite. 
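The Downward API env-var spec above is the env counterpart of the volume case: fieldRef selectors populate environment variables at container start. Sketch with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: downward-env-demo             # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep ^POD_"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP

Unlike the volume variant, env vars are resolved once at container start and are not updated afterwards.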
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":18,"skipped":337,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:11:49.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-63b31a65-3244-49d3-b97c-302309182649 STEP: Creating a pod to test consume configMaps Mar 30 21:11:49.173: INFO: Waiting up to 5m0s for pod "pod-configmaps-5e159804-b839-4a16-bcaa-9e1b1b9c82fc" in namespace "configmap-4735" to be "success or failure" Mar 30 21:11:49.179: INFO: Pod "pod-configmaps-5e159804-b839-4a16-bcaa-9e1b1b9c82fc": Phase="Pending", Reason="", readiness=false. Elapsed: 5.832103ms Mar 30 21:11:51.183: INFO: Pod "pod-configmaps-5e159804-b839-4a16-bcaa-9e1b1b9c82fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009832246s Mar 30 21:11:53.188: INFO: Pod "pod-configmaps-5e159804-b839-4a16-bcaa-9e1b1b9c82fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014164091s STEP: Saw pod success Mar 30 21:11:53.188: INFO: Pod "pod-configmaps-5e159804-b839-4a16-bcaa-9e1b1b9c82fc" satisfied condition "success or failure" Mar 30 21:11:53.191: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-5e159804-b839-4a16-bcaa-9e1b1b9c82fc container configmap-volume-test: STEP: delete the pod Mar 30 21:11:53.232: INFO: Waiting for pod pod-configmaps-5e159804-b839-4a16-bcaa-9e1b1b9c82fc to disappear Mar 30 21:11:53.285: INFO: Pod pod-configmaps-5e159804-b839-4a16-bcaa-9e1b1b9c82fc no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:11:53.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4735" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":19,"skipped":402,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:11:53.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-6acc6ad6-5a13-4f21-89b6-70896ffb91ff STEP: Creating a pod to test consume configMaps Mar 30 21:11:53.423: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7bdbcdb7-1453-43e8-896c-ce339f3fad09" in namespace "projected-4668" to be "success or failure" Mar 30 21:11:53.431: INFO: Pod "pod-projected-configmaps-7bdbcdb7-1453-43e8-896c-ce339f3fad09": Phase="Pending", Reason="", readiness=false. Elapsed: 8.216404ms Mar 30 21:11:55.434: INFO: Pod "pod-projected-configmaps-7bdbcdb7-1453-43e8-896c-ce339f3fad09": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011506385s Mar 30 21:11:57.439: INFO: Pod "pod-projected-configmaps-7bdbcdb7-1453-43e8-896c-ce339f3fad09": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015912491s STEP: Saw pod success Mar 30 21:11:57.439: INFO: Pod "pod-projected-configmaps-7bdbcdb7-1453-43e8-896c-ce339f3fad09" satisfied condition "success or failure" Mar 30 21:11:57.442: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-7bdbcdb7-1453-43e8-896c-ce339f3fad09 container projected-configmap-volume-test: STEP: delete the pod Mar 30 21:11:57.557: INFO: Waiting for pod pod-projected-configmaps-7bdbcdb7-1453-43e8-896c-ce339f3fad09 to disappear Mar 30 21:11:57.633: INFO: Pod pod-projected-configmaps-7bdbcdb7-1453-43e8-896c-ce339f3fad09 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:11:57.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4668" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":20,"skipped":422,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:11:57.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:11:57.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1094" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":21,"skipped":431,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:11:57.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-72532afd-2f41-48fa-a6f0-bdedeb2c1ffc STEP: Creating a pod to test consume configMaps Mar 30 21:11:57.917: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e2e3b7ad-a90e-4b24-9094-246a0773a7b7" in namespace "projected-3423" to be "success or failure" Mar 30 21:11:57.926: INFO: Pod "pod-projected-configmaps-e2e3b7ad-a90e-4b24-9094-246a0773a7b7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.993702ms Mar 30 21:11:59.930: INFO: Pod "pod-projected-configmaps-e2e3b7ad-a90e-4b24-9094-246a0773a7b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012646335s Mar 30 21:12:01.934: INFO: Pod "pod-projected-configmaps-e2e3b7ad-a90e-4b24-9094-246a0773a7b7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.016784451s STEP: Saw pod success Mar 30 21:12:01.934: INFO: Pod "pod-projected-configmaps-e2e3b7ad-a90e-4b24-9094-246a0773a7b7" satisfied condition "success or failure" Mar 30 21:12:01.937: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-e2e3b7ad-a90e-4b24-9094-246a0773a7b7 container projected-configmap-volume-test: STEP: delete the pod Mar 30 21:12:01.968: INFO: Waiting for pod pod-projected-configmaps-e2e3b7ad-a90e-4b24-9094-246a0773a7b7 to disappear Mar 30 21:12:01.998: INFO: Pod pod-projected-configmaps-e2e3b7ad-a90e-4b24-9094-246a0773a7b7 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:12:01.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3423" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":22,"skipped":499,"failed":0} ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:12:02.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-d11c6a50-72da-4967-b047-8f48f3c29f8b STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:12:06.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6423" for this suite. 
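The binary-data spec above uses the binaryData field of ConfigMap, which holds base64-encoded bytes alongside the plain-text data map; both end up as files in the volume. Illustrative object:

apiVersion: v1
kind: ConfigMap
metadata:
  name: binary-config-demo            # illustrative
data:
  text-data: "hello"
binaryData:
  binary-file: 3q2+7w==               # base64 of the bytes 0xde 0xad 0xbe 0xef

A given key may appear in data or in binaryData, but not in both.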
•{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":23,"skipped":499,"failed":0} SSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:12:06.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service multi-endpoint-test in namespace services-8293 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8293 to expose endpoints map[] Mar 30 21:12:06.377: INFO: Get endpoints failed (2.177968ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Mar 30 21:12:07.383: INFO: successfully validated that service multi-endpoint-test in namespace services-8293 exposes endpoints map[] (1.008260039s elapsed) STEP: Creating pod pod1 in namespace services-8293 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8293 to expose endpoints map[pod1:[100]] Mar 30 21:12:10.515: INFO: successfully validated that service multi-endpoint-test in namespace services-8293 exposes endpoints map[pod1:[100]] (3.125008598s elapsed) STEP: Creating pod pod2 in namespace services-8293 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8293 to expose endpoints map[pod1:[100] pod2:[101]] Mar 30 21:12:14.617: INFO: successfully validated that service multi-endpoint-test in namespace services-8293 exposes endpoints map[pod1:[100] pod2:[101]] (4.0982482s elapsed) STEP: Deleting pod pod1 in namespace services-8293 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8293 to expose endpoints map[pod2:[101]] Mar 30 21:12:15.683: INFO: successfully validated that service multi-endpoint-test in namespace services-8293 exposes endpoints map[pod2:[101]] (1.061766922s elapsed) STEP: Deleting pod pod2 in namespace services-8293 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8293 to expose endpoints map[] Mar 30 21:12:16.723: INFO: successfully validated that service multi-endpoint-test in namespace services-8293 exposes endpoints map[] (1.035518946s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:12:16.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8293" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:10.561 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":24,"skipped":507,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:12:16.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:12:20.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4401" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":25,"skipped":519,"failed":0} SSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:12:20.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 30 21:12:21.033: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-1af71191-7743-4958-9bae-ffe4bf450e5c" in namespace "security-context-test-7976" to be "success or failure" Mar 30 21:12:21.038: INFO: Pod "alpine-nnp-false-1af71191-7743-4958-9bae-ffe4bf450e5c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.191562ms Mar 30 21:12:23.041: INFO: Pod "alpine-nnp-false-1af71191-7743-4958-9bae-ffe4bf450e5c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007712785s Mar 30 21:12:25.052: INFO: Pod "alpine-nnp-false-1af71191-7743-4958-9bae-ffe4bf450e5c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018510501s Mar 30 21:12:25.052: INFO: Pod "alpine-nnp-false-1af71191-7743-4958-9bae-ffe4bf450e5c" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:12:25.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7976" for this suite. •{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":26,"skipped":526,"failed":0} S ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:12:25.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:12:25.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-3124" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":27,"skipped":527,"failed":0} SSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:12:25.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 30 21:12:25.193: INFO: Waiting up to 5m0s for pod "downward-api-0e2aff83-23e6-43ae-8cf5-09b924e653d0" in namespace "downward-api-712" to be "success or failure" Mar 30 21:12:25.196: INFO: Pod "downward-api-0e2aff83-23e6-43ae-8cf5-09b924e653d0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.934631ms Mar 30 21:12:27.204: INFO: Pod "downward-api-0e2aff83-23e6-43ae-8cf5-09b924e653d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010479078s Mar 30 21:12:29.208: INFO: Pod "downward-api-0e2aff83-23e6-43ae-8cf5-09b924e653d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014919801s STEP: Saw pod success Mar 30 21:12:29.208: INFO: Pod "downward-api-0e2aff83-23e6-43ae-8cf5-09b924e653d0" satisfied condition "success or failure" Mar 30 21:12:29.211: INFO: Trying to get logs from node jerma-worker2 pod downward-api-0e2aff83-23e6-43ae-8cf5-09b924e653d0 container dapi-container: STEP: delete the pod Mar 30 21:12:29.276: INFO: Waiting for pod downward-api-0e2aff83-23e6-43ae-8cf5-09b924e653d0 to disappear Mar 30 21:12:29.280: INFO: Pod downward-api-0e2aff83-23e6-43ae-8cf5-09b924e653d0 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:12:29.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-712" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":28,"skipped":536,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:12:29.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating all guestbook components Mar 30 21:12:29.334: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend Mar 30 21:12:29.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2002' Mar 30 21:12:31.860: INFO: stderr: "" Mar 30 21:12:31.861: INFO: stdout: "service/agnhost-slave created\n" Mar 30 21:12:31.861: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend Mar 30 21:12:31.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2002' Mar 30 21:12:32.131: INFO: stderr: "" Mar 30 21:12:32.131: INFO: stdout: "service/agnhost-master created\n" Mar 30 21:12:32.132: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Mar 30 21:12:32.132: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2002' Mar 30 21:12:32.458: INFO: stderr: "" Mar 30 21:12:32.458: INFO: stdout: "service/frontend created\n" Mar 30 21:12:32.459: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Mar 30 21:12:32.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2002' Mar 30 21:12:32.705: INFO: stderr: "" Mar 30 21:12:32.705: INFO: stdout: "deployment.apps/frontend created\n" Mar 30 21:12:32.705: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Mar 30 21:12:32.705: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2002' Mar 30 21:12:33.005: INFO: stderr: "" Mar 30 21:12:33.005: INFO: stdout: "deployment.apps/agnhost-master created\n" Mar 30 21:12:33.005: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Mar 30 21:12:33.006: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2002' Mar 30 21:12:33.318: INFO: stderr: "" Mar 30 21:12:33.318: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app Mar 30 21:12:33.318: INFO: Waiting for all frontend pods to be Running. Mar 30 21:12:38.368: INFO: Waiting for frontend to serve content. Mar 30 21:12:39.439: INFO: Trying to add a new entry to the guestbook. Mar 30 21:12:39.453: INFO: Verifying that added entry can be retrieved. Mar 30 21:12:39.483: INFO: Failed to get response from guestbook. err: , response: {"data":""} STEP: using delete to clean up resources Mar 30 21:12:44.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2002' Mar 30 21:12:44.666: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 30 21:12:44.666: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Mar 30 21:12:44.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2002' Mar 30 21:12:44.819: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 30 21:12:44.820: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Mar 30 21:12:44.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2002' Mar 30 21:12:45.023: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 30 21:12:45.023: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 30 21:12:45.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2002' Mar 30 21:12:45.124: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 30 21:12:45.124: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Mar 30 21:12:45.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2002' Mar 30 21:12:45.220: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 30 21:12:45.220: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Mar 30 21:12:45.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2002' Mar 30 21:12:45.316: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 30 21:12:45.316: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:12:45.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2002" for this suite. 
• [SLOW TEST:16.034 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:386 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":278,"completed":29,"skipped":539,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:12:45.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 30 21:12:45.386: INFO: Waiting up to 5m0s for pod "pod-1ad5e2da-6a9d-4cd0-b09c-c6396ccef42e" in namespace "emptydir-809" to be "success or failure" Mar 30 21:12:45.436: INFO: Pod "pod-1ad5e2da-6a9d-4cd0-b09c-c6396ccef42e": Phase="Pending", Reason="", readiness=false. Elapsed: 49.98091ms Mar 30 21:12:47.439: INFO: Pod "pod-1ad5e2da-6a9d-4cd0-b09c-c6396ccef42e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052984573s Mar 30 21:12:49.442: INFO: Pod "pod-1ad5e2da-6a9d-4cd0-b09c-c6396ccef42e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056142656s Mar 30 21:12:51.446: INFO: Pod "pod-1ad5e2da-6a9d-4cd0-b09c-c6396ccef42e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.06003643s STEP: Saw pod success Mar 30 21:12:51.446: INFO: Pod "pod-1ad5e2da-6a9d-4cd0-b09c-c6396ccef42e" satisfied condition "success or failure" Mar 30 21:12:51.450: INFO: Trying to get logs from node jerma-worker2 pod pod-1ad5e2da-6a9d-4cd0-b09c-c6396ccef42e container test-container: STEP: delete the pod Mar 30 21:12:51.517: INFO: Waiting for pod pod-1ad5e2da-6a9d-4cd0-b09c-c6396ccef42e to disappear Mar 30 21:12:51.549: INFO: Pod pod-1ad5e2da-6a9d-4cd0-b09c-c6396ccef42e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:12:51.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-809" for this suite. 
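The (non-root,0644,tmpfs) case above differs from the earlier (non-root,0777,tmpfs) one only in the file mode under test; the same emptyDir medium: Memory sketch applies, with chmod 0644 in place of 0777.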
• [SLOW TEST:6.233 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":30,"skipped":543,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:12:51.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 30 21:12:51.600: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 30 21:12:51.616: INFO: Waiting for terminating namespaces to be deleted... Mar 30 21:12:51.619: INFO: Logging pods the kubelet thinks are on node jerma-worker before test Mar 30 21:12:51.625: INFO: agnhost-slave-774cfc759f-hwgvf from kubectl-2002 started at 2020-03-30 21:12:33 +0000 UTC (1 container statuses recorded) Mar 30 21:12:51.625: INFO: Container slave ready: false, restart count 0 Mar 30 21:12:51.625: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 30 21:12:51.625: INFO: Container kindnet-cni ready: true, restart count 0 Mar 30 21:12:51.625: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 30 21:12:51.625: INFO: Container kube-proxy ready: true, restart count 0 Mar 30 21:12:51.625: INFO: frontend-6c5f89d5d4-hj4zc from kubectl-2002 started at 2020-03-30 21:12:32 +0000 UTC (1 container statuses recorded) Mar 30 21:12:51.625: INFO: Container guestbook-frontend ready: false, restart count 0 Mar 30 21:12:51.625: INFO: busybox-scheduling-6d641dd6-8351-4289-90d5-916b39538a01 from kubelet-test-4401 started at 2020-03-30 21:12:16 +0000 UTC (1 container statuses recorded) Mar 30 21:12:51.625: INFO: Container busybox-scheduling-6d641dd6-8351-4289-90d5-916b39538a01 ready: true, restart count 0 Mar 30 21:12:51.625: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test Mar 30 21:12:51.637: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) Mar 30 21:12:51.637: INFO: Container kube-hunter ready: false, restart count 0 Mar 30 21:12:51.637: INFO: agnhost-master-74c46fb7d4-q94pw from kubectl-2002 started at 2020-03-30 21:12:33 +0000 UTC (1 container statuses recorded) Mar 30 21:12:51.637: INFO: Container master ready: false, restart count 0 Mar 30 21:12:51.637: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 30 21:12:51.637: INFO: Container kindnet-cni ready: true, restart count 0 Mar 30 21:12:51.637: INFO: 
kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) Mar 30 21:12:51.637: INFO: Container kube-bench ready: false, restart count 0 Mar 30 21:12:51.637: INFO: frontend-6c5f89d5d4-t2k6k from kubectl-2002 started at 2020-03-30 21:12:32 +0000 UTC (1 container statuses recorded) Mar 30 21:12:51.637: INFO: Container guestbook-frontend ready: false, restart count 0 Mar 30 21:12:51.637: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 30 21:12:51.637: INFO: Container kube-proxy ready: true, restart count 0 Mar 30 21:12:51.637: INFO: frontend-6c5f89d5d4-jg6jq from kubectl-2002 started at 2020-03-30 21:12:32 +0000 UTC (1 container statuses recorded) Mar 30 21:12:51.637: INFO: Container guestbook-frontend ready: false, restart count 0 Mar 30 21:12:51.637: INFO: agnhost-slave-774cfc759f-pxsh5 from kubectl-2002 started at 2020-03-30 21:12:33 +0000 UTC (1 container statuses recorded) Mar 30 21:12:51.637: INFO: Container slave ready: false, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.16013151e866bf7c], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:12:52.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4685" for this suite. 
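The FailedScheduling event recorded above can be reproduced with any pod whose nodeSelector matches no node; in this sketch the pod name comes from the event, while the selector key and value are assumptions chosen so that no node carries them.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod             # name matches the event above
spec:
  nodeSelector:
    env: nonexistent               # assumed key/value that no node carries
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1    # stand-in image
EOF
kubectl describe pod restricted-pod   # Events should show: 0/3 nodes are available: 3 node(s) didn't match node selector.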
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":31,"skipped":561,"failed":0} S ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:12:52.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Mar 30 21:12:56.822: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Mar 30 21:13:11.931: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:13:11.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8506" for this suite. 
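The grace-period flow above (submit a pod, delete it gracefully, confirm the kubelet observed the termination notice and the pod disappears) can be exercised by hand with a sketch like this; the pod name, image, and 30s grace period are illustrative.

kubectl run graceful-demo --image=busybox:1.29 --restart=Never -- sleep 3600
kubectl delete pod graceful-demo --grace-period=30   # kubelet delivers SIGTERM, then SIGKILL at the deadline
kubectl get pod graceful-demo                        # ends in NotFound once removal completes, as logged above

Note that sleep runs as PID 1 and installs no SIGTERM handler, so this particular pod lingers for the full grace period before the kubelet falls back to SIGKILL.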
• [SLOW TEST:19.258 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":32,"skipped":562,"failed":0} SSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:13:11.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-bwdj STEP: Creating a pod to test atomic-volume-subpath Mar 30 21:13:12.035: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-bwdj" in namespace "subpath-2099" to be "success or failure" Mar 30 21:13:12.052: INFO: Pod "pod-subpath-test-configmap-bwdj": Phase="Pending", Reason="", readiness=false. Elapsed: 16.730989ms Mar 30 21:13:14.057: INFO: Pod "pod-subpath-test-configmap-bwdj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021269762s Mar 30 21:13:16.061: INFO: Pod "pod-subpath-test-configmap-bwdj": Phase="Running", Reason="", readiness=true. Elapsed: 4.025907114s Mar 30 21:13:18.066: INFO: Pod "pod-subpath-test-configmap-bwdj": Phase="Running", Reason="", readiness=true. Elapsed: 6.030426594s Mar 30 21:13:20.070: INFO: Pod "pod-subpath-test-configmap-bwdj": Phase="Running", Reason="", readiness=true. Elapsed: 8.034772851s Mar 30 21:13:22.075: INFO: Pod "pod-subpath-test-configmap-bwdj": Phase="Running", Reason="", readiness=true. Elapsed: 10.039301573s Mar 30 21:13:24.079: INFO: Pod "pod-subpath-test-configmap-bwdj": Phase="Running", Reason="", readiness=true. Elapsed: 12.043110511s Mar 30 21:13:26.083: INFO: Pod "pod-subpath-test-configmap-bwdj": Phase="Running", Reason="", readiness=true. Elapsed: 14.047636428s Mar 30 21:13:28.087: INFO: Pod "pod-subpath-test-configmap-bwdj": Phase="Running", Reason="", readiness=true. Elapsed: 16.051535867s Mar 30 21:13:30.091: INFO: Pod "pod-subpath-test-configmap-bwdj": Phase="Running", Reason="", readiness=true. Elapsed: 18.055685297s Mar 30 21:13:32.096: INFO: Pod "pod-subpath-test-configmap-bwdj": Phase="Running", Reason="", readiness=true. Elapsed: 20.06014023s Mar 30 21:13:34.100: INFO: Pod "pod-subpath-test-configmap-bwdj": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.0642499s Mar 30 21:13:36.104: INFO: Pod "pod-subpath-test-configmap-bwdj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.068966046s STEP: Saw pod success Mar 30 21:13:36.105: INFO: Pod "pod-subpath-test-configmap-bwdj" satisfied condition "success or failure" Mar 30 21:13:36.111: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-bwdj container test-container-subpath-configmap-bwdj: STEP: delete the pod Mar 30 21:13:36.181: INFO: Waiting for pod pod-subpath-test-configmap-bwdj to disappear Mar 30 21:13:36.188: INFO: Pod pod-subpath-test-configmap-bwdj no longer exists STEP: Deleting pod pod-subpath-test-configmap-bwdj Mar 30 21:13:36.188: INFO: Deleting pod "pod-subpath-test-configmap-bwdj" in namespace "subpath-2099" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:13:36.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2099" for this suite. • [SLOW TEST:24.256 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":33,"skipped":567,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:13:36.203: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 30 21:13:40.388: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:13:40.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9626" for this suite. 
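The FallbackToLogsOnError behaviour verified above (the "Expected: &{DONE} to match Container's Termination Message: DONE" line) comes from a container that writes to its log and then fails; a minimal sketch with illustrative names follows.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: term
    image: busybox:1.29            # stand-in image
    command: ["sh", "-c", "echo DONE; exit 1"]       # fail after writing to the log
    terminationMessagePolicy: FallbackToLogsOnError  # log tail becomes the termination message
EOF
kubectl get pod termination-message-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'   # prints DONE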
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":34,"skipped":598,"failed":0} ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:13:40.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 30 21:13:40.498: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 30 21:13:40.507: INFO: Waiting for terminating namespaces to be deleted... Mar 30 21:13:40.509: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Mar 30 21:13:40.513: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 30 21:13:40.513: INFO: Container kindnet-cni ready: true, restart count 0 Mar 30 21:13:40.513: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 30 21:13:40.513: INFO: Container kube-proxy ready: true, restart count 0 Mar 30 21:13:40.513: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Mar 30 21:13:40.517: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) Mar 30 21:13:40.517: INFO: Container kube-hunter ready: false, restart count 0 Mar 30 21:13:40.517: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 30 21:13:40.517: INFO: Container kindnet-cni ready: true, restart count 0 Mar 30 21:13:40.517: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) Mar 30 21:13:40.517: INFO: Container kube-bench ready: false, restart count 0 Mar 30 21:13:40.517: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 30 21:13:40.517: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-7bf2ff93-3811-4156-9616-432f6cb3fb6b 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-7bf2ff93-3811-4156-9616-432f6cb3fb6b off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-7bf2ff93-3811-4156-9616-432f6cb3fb6b [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:18:48.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8855" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:308.275 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":35,"skipped":598,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:18:48.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 30 21:18:48.796: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9f353be9-5740-4988-adb9-b76ce1424ec1" in namespace "projected-7907" to be "success or failure" Mar 30 21:18:48.803: INFO: Pod "downwardapi-volume-9f353be9-5740-4988-adb9-b76ce1424ec1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.411698ms Mar 30 21:18:50.806: INFO: Pod "downwardapi-volume-9f353be9-5740-4988-adb9-b76ce1424ec1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010164879s Mar 30 21:18:52.810: INFO: Pod "downwardapi-volume-9f353be9-5740-4988-adb9-b76ce1424ec1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013497401s STEP: Saw pod success Mar 30 21:18:52.810: INFO: Pod "downwardapi-volume-9f353be9-5740-4988-adb9-b76ce1424ec1" satisfied condition "success or failure" Mar 30 21:18:52.812: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-9f353be9-5740-4988-adb9-b76ce1424ec1 container client-container: STEP: delete the pod Mar 30 21:18:52.845: INFO: Waiting for pod downwardapi-volume-9f353be9-5740-4988-adb9-b76ce1424ec1 to disappear Mar 30 21:18:52.850: INFO: Pod downwardapi-volume-9f353be9-5740-4988-adb9-b76ce1424ec1 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:18:52.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7907" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":36,"skipped":624,"failed":0} SSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:18:52.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 30 21:19:01.055: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 30 21:19:01.059: INFO: Pod pod-with-prestop-http-hook still exists Mar 30 21:19:03.060: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 30 21:19:03.063: INFO: Pod pod-with-prestop-http-hook still exists Mar 30 21:19:05.060: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Mar 30 21:19:05.063: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:19:05.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6517" for this suite. 
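The prestop check above attaches an HTTP preStop hook that calls back to the handler pod created in BeforeEach; in this sketch the handler address, port, and path are assumptions, since the log only shows the hook pod's name.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook   # name as logged above
spec:
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.1      # stand-in image
    lifecycle:
      preStop:
        httpGet:
          host: 10.244.1.10          # assumed IP of the hook-handler pod
          port: 8080                 # assumed handler port
          path: /echo?msg=prestop    # assumed path the handler records
EOF
kubectl delete pod pod-with-prestop-http-hook   # kubelet issues the GET before stopping the container

The polling above ("still exists" ... "no longer exists") simply waits out the graceful stop; the test then asks the handler whether the hook request arrived.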
• [SLOW TEST:12.238 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":37,"skipped":628,"failed":0} SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:19:05.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-1305 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-1305 STEP: Creating statefulset with conflicting port in namespace statefulset-1305 STEP: Waiting until pod test-pod will start running in namespace statefulset-1305 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-1305 Mar 30 21:19:09.282: INFO: Observed stateful pod in namespace: statefulset-1305, name: ss-0, uid: 34452067-e49c-4a1e-9eba-d21669782263, status phase: Pending. Waiting for statefulset controller to delete. Mar 30 21:19:09.287: INFO: Observed stateful pod in namespace: statefulset-1305, name: ss-0, uid: 34452067-e49c-4a1e-9eba-d21669782263, status phase: Failed. Waiting for statefulset controller to delete. Mar 30 21:19:09.295: INFO: Observed stateful pod in namespace: statefulset-1305, name: ss-0, uid: 34452067-e49c-4a1e-9eba-d21669782263, status phase: Failed. Waiting for statefulset controller to delete. 
Mar 30 21:19:09.341: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-1305 STEP: Removing pod with conflicting port in namespace statefulset-1305 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-1305 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 30 21:19:15.477: INFO: Deleting all statefulset in ns statefulset-1305 Mar 30 21:19:15.480: INFO: Scaling statefulset ss to 0 Mar 30 21:19:25.501: INFO: Waiting for statefulset status.replicas updated to 0 Mar 30 21:19:25.504: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:19:25.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1305" for this suite. • [SLOW TEST:20.454 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":38,"skipped":634,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:19:25.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-a247fa28-adbd-4cbb-9883-abe6009a13b3 STEP: Creating a pod to test consume configMaps Mar 30 21:19:25.625: INFO: Waiting up to 5m0s for pod "pod-configmaps-df879336-acad-4cd8-9f72-052d43c4bd70" in namespace "configmap-6264" to be "success or failure" Mar 30 21:19:25.642: INFO: Pod "pod-configmaps-df879336-acad-4cd8-9f72-052d43c4bd70": Phase="Pending", Reason="", readiness=false. Elapsed: 16.632379ms Mar 30 21:19:27.687: INFO: Pod "pod-configmaps-df879336-acad-4cd8-9f72-052d43c4bd70": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061749196s Mar 30 21:19:29.691: INFO: Pod "pod-configmaps-df879336-acad-4cd8-9f72-052d43c4bd70": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.065220131s STEP: Saw pod success Mar 30 21:19:29.691: INFO: Pod "pod-configmaps-df879336-acad-4cd8-9f72-052d43c4bd70" satisfied condition "success or failure" Mar 30 21:19:29.693: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-df879336-acad-4cd8-9f72-052d43c4bd70 container configmap-volume-test: STEP: delete the pod Mar 30 21:19:29.714: INFO: Waiting for pod pod-configmaps-df879336-acad-4cd8-9f72-052d43c4bd70 to disappear Mar 30 21:19:29.719: INFO: Pod pod-configmaps-df879336-acad-4cd8-9f72-052d43c4bd70 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:19:29.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6264" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":39,"skipped":664,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:19:29.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-e3d6478d-5a12-4252-a10e-89a7b7980395 STEP: Creating a pod to test consume configMaps Mar 30 21:19:29.825: INFO: Waiting up to 5m0s for pod "pod-configmaps-ade1ccdf-89ee-4138-96ae-772f8c99446d" in namespace "configmap-3242" to be "success or failure" Mar 30 21:19:29.848: INFO: Pod "pod-configmaps-ade1ccdf-89ee-4138-96ae-772f8c99446d": Phase="Pending", Reason="", readiness=false. Elapsed: 22.288351ms Mar 30 21:19:31.987: INFO: Pod "pod-configmaps-ade1ccdf-89ee-4138-96ae-772f8c99446d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.161448335s Mar 30 21:19:33.991: INFO: Pod "pod-configmaps-ade1ccdf-89ee-4138-96ae-772f8c99446d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.165493928s STEP: Saw pod success Mar 30 21:19:33.991: INFO: Pod "pod-configmaps-ade1ccdf-89ee-4138-96ae-772f8c99446d" satisfied condition "success or failure" Mar 30 21:19:33.994: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-ade1ccdf-89ee-4138-96ae-772f8c99446d container configmap-volume-test: STEP: delete the pod Mar 30 21:19:34.039: INFO: Waiting for pod pod-configmaps-ade1ccdf-89ee-4138-96ae-772f8c99446d to disappear Mar 30 21:19:34.043: INFO: Pod pod-configmaps-ade1ccdf-89ee-4138-96ae-772f8c99446d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:19:34.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3242" for this suite. 
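Both ConfigMap volume cases above share one shape: a ConfigMap, a pod that mounts it with an item mapping (and, for the second case, a non-root security context), and a log check. A sketch with illustrative names and a stand-in image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-demo                  # illustrative name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo      # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000              # the "as non-root" variant
  containers:
  - name: configmap-volume-test
    image: busybox:1.29          # stand-in image
    command: ["cat", "/etc/configmap-volume/renamed-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: cm-demo
      items:
      - key: data-1
        path: renamed-1          # the key -> path "mapping"
EOF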
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":40,"skipped":677,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:19:34.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:19:45.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4051" for this suite. • [SLOW TEST:11.251 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":41,"skipped":684,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:19:45.303: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-de315e52-8a87-4734-b5b6-948be667ce46 STEP: Creating a pod to test consume secrets Mar 30 21:19:45.388: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-948087d5-aae3-4f2f-aee7-c9dd2edf9817" in namespace "projected-1745" to be "success or failure" Mar 30 21:19:45.396: INFO: Pod "pod-projected-secrets-948087d5-aae3-4f2f-aee7-c9dd2edf9817": Phase="Pending", Reason="", readiness=false. 
Elapsed: 7.624842ms Mar 30 21:19:47.400: INFO: Pod "pod-projected-secrets-948087d5-aae3-4f2f-aee7-c9dd2edf9817": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011547207s Mar 30 21:19:49.404: INFO: Pod "pod-projected-secrets-948087d5-aae3-4f2f-aee7-c9dd2edf9817": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015633783s STEP: Saw pod success Mar 30 21:19:49.404: INFO: Pod "pod-projected-secrets-948087d5-aae3-4f2f-aee7-c9dd2edf9817" satisfied condition "success or failure" Mar 30 21:19:49.407: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-948087d5-aae3-4f2f-aee7-c9dd2edf9817 container projected-secret-volume-test: STEP: delete the pod Mar 30 21:19:49.440: INFO: Waiting for pod pod-projected-secrets-948087d5-aae3-4f2f-aee7-c9dd2edf9817 to disappear Mar 30 21:19:49.460: INFO: Pod pod-projected-secrets-948087d5-aae3-4f2f-aee7-c9dd2edf9817 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:19:49.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1745" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":42,"skipped":697,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:19:49.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition Mar 30 21:19:49.548: INFO: Waiting up to 5m0s for pod "var-expansion-865bb6b2-ccb3-4225-b6af-287b59d0d63c" in namespace "var-expansion-6280" to be "success or failure" Mar 30 21:19:49.565: INFO: Pod "var-expansion-865bb6b2-ccb3-4225-b6af-287b59d0d63c": Phase="Pending", Reason="", readiness=false. Elapsed: 16.451362ms Mar 30 21:19:51.569: INFO: Pod "var-expansion-865bb6b2-ccb3-4225-b6af-287b59d0d63c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020813971s Mar 30 21:19:53.574: INFO: Pod "var-expansion-865bb6b2-ccb3-4225-b6af-287b59d0d63c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.025198137s STEP: Saw pod success Mar 30 21:19:53.574: INFO: Pod "var-expansion-865bb6b2-ccb3-4225-b6af-287b59d0d63c" satisfied condition "success or failure" Mar 30 21:19:53.577: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-865bb6b2-ccb3-4225-b6af-287b59d0d63c container dapi-container: STEP: delete the pod Mar 30 21:19:53.640: INFO: Waiting for pod var-expansion-865bb6b2-ccb3-4225-b6af-287b59d0d63c to disappear Mar 30 21:19:53.717: INFO: Pod var-expansion-865bb6b2-ccb3-4225-b6af-287b59d0d63c no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:19:53.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6280" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":43,"skipped":723,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:19:53.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: executing a command with run --rm and attach with stdin Mar 30 21:19:53.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5395 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Mar 30 21:19:56.919: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0330 21:19:56.824496 299 log.go:172] (0xc000bce160) (0xc0007b4140) Create stream\nI0330 21:19:56.824565 299 log.go:172] (0xc000bce160) (0xc0007b4140) Stream added, broadcasting: 1\nI0330 21:19:56.827749 299 log.go:172] (0xc000bce160) Reply frame received for 1\nI0330 21:19:56.827794 299 log.go:172] (0xc000bce160) (0xc000a580a0) Create stream\nI0330 21:19:56.827808 299 log.go:172] (0xc000bce160) (0xc000a580a0) Stream added, broadcasting: 3\nI0330 21:19:56.828765 299 log.go:172] (0xc000bce160) Reply frame received for 3\nI0330 21:19:56.828801 299 log.go:172] (0xc000bce160) (0xc0007b41e0) Create stream\nI0330 21:19:56.828811 299 log.go:172] (0xc000bce160) (0xc0007b41e0) Stream added, broadcasting: 5\nI0330 21:19:56.829711 299 log.go:172] (0xc000bce160) Reply frame received for 5\nI0330 21:19:56.829767 299 log.go:172] (0xc000bce160) (0xc000816000) Create stream\nI0330 21:19:56.829786 299 log.go:172] (0xc000bce160) (0xc000816000) Stream added, broadcasting: 7\nI0330 21:19:56.830744 299 log.go:172] (0xc000bce160) Reply frame received for 7\nI0330 21:19:56.830995 299 log.go:172] (0xc000a580a0) (3) Writing data frame\nI0330 21:19:56.831110 299 log.go:172] (0xc000a580a0) (3) Writing data frame\nI0330 21:19:56.832012 299 log.go:172] (0xc000bce160) Data frame received for 5\nI0330 21:19:56.832033 299 log.go:172] (0xc0007b41e0) (5) Data frame handling\nI0330 21:19:56.832049 299 log.go:172] (0xc0007b41e0) (5) Data frame sent\nI0330 21:19:56.832558 299 log.go:172] (0xc000bce160) Data frame received for 5\nI0330 21:19:56.832578 299 log.go:172] (0xc0007b41e0) (5) Data frame handling\nI0330 21:19:56.832594 299 log.go:172] (0xc0007b41e0) (5) Data frame sent\nI0330 21:19:56.883536 299 log.go:172] (0xc000bce160) Data frame received for 5\nI0330 21:19:56.883696 299 log.go:172] (0xc0007b41e0) (5) Data frame handling\nI0330 21:19:56.883743 299 log.go:172] (0xc000bce160) Data frame received for 7\nI0330 21:19:56.883760 299 log.go:172] (0xc000816000) (7) Data frame handling\nI0330 21:19:56.884021 299 log.go:172] (0xc000bce160) Data frame received for 1\nI0330 21:19:56.884068 299 log.go:172] (0xc000bce160) (0xc000a580a0) Stream removed, broadcasting: 3\nI0330 21:19:56.884208 299 log.go:172] (0xc0007b4140) (1) Data frame handling\nI0330 21:19:56.884233 299 log.go:172] (0xc0007b4140) (1) Data frame sent\nI0330 21:19:56.884248 299 log.go:172] (0xc000bce160) (0xc0007b4140) Stream removed, broadcasting: 1\nI0330 21:19:56.884271 299 log.go:172] (0xc000bce160) Go away received\nI0330 21:19:56.884789 299 log.go:172] (0xc000bce160) (0xc0007b4140) Stream removed, broadcasting: 1\nI0330 21:19:56.884814 299 log.go:172] (0xc000bce160) (0xc000a580a0) Stream removed, broadcasting: 3\nI0330 21:19:56.884826 299 log.go:172] (0xc000bce160) (0xc0007b41e0) Stream removed, broadcasting: 5\nI0330 21:19:56.884839 299 log.go:172] (0xc000bce160) (0xc000816000) Stream removed, broadcasting: 7\n" Mar 30 21:19:56.919: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:19:58.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5395" for this suite. 
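The stderr above flags --generator=job/v1 as deprecated. A manifest equivalent of the job the command creates, with the container spec reconstructed from the command line quoted in the log (treat it as a sketch, not the generator's exact output):

kubectl apply -f - <<'EOF'
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-rm-busybox-job
spec:
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: e2e-test-rm-busybox-job
        image: docker.io/library/busybox:1.29
        stdin: true                                  # matches --stdin
        command: ["sh", "-c", "cat && echo 'stdin closed'"]
EOF
kubectl delete job e2e-test-rm-busybox-job   # with --rm, kubectl does this itself once the attach ends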
• [SLOW TEST:5.206 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1944 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":44,"skipped":725,"failed":0} S ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:19:58.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:20:13.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-7200" for this suite. • [SLOW TEST:14.112 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":45,"skipped":726,"failed":0} S ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:20:13.046: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Mar 30 21:20:18.166: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:20:18.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-1611" for this suite. • [SLOW TEST:5.227 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":46,"skipped":727,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:20:18.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-116/configmap-test-bb8944a7-10ef-4679-ae06-e7ab5e986c4f STEP: Creating a pod to test consume configMaps Mar 30 21:20:18.429: INFO: Waiting up to 5m0s for pod "pod-configmaps-d9d58a57-5da2-45ff-8cc2-7c405a8183fb" in namespace "configmap-116" to be "success or failure" Mar 30 21:20:18.445: INFO: Pod "pod-configmaps-d9d58a57-5da2-45ff-8cc2-7c405a8183fb": Phase="Pending", Reason="", readiness=false. Elapsed: 15.947189ms Mar 30 21:20:20.449: INFO: Pod "pod-configmaps-d9d58a57-5da2-45ff-8cc2-7c405a8183fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019150787s Mar 30 21:20:22.452: INFO: Pod "pod-configmaps-d9d58a57-5da2-45ff-8cc2-7c405a8183fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022880246s STEP: Saw pod success Mar 30 21:20:22.452: INFO: Pod "pod-configmaps-d9d58a57-5da2-45ff-8cc2-7c405a8183fb" satisfied condition "success or failure" Mar 30 21:20:22.455: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-d9d58a57-5da2-45ff-8cc2-7c405a8183fb container env-test: STEP: delete the pod Mar 30 21:20:22.512: INFO: Waiting for pod pod-configmaps-d9d58a57-5da2-45ff-8cc2-7c405a8183fb to disappear Mar 30 21:20:22.567: INFO: Pod pod-configmaps-d9d58a57-5da2-45ff-8cc2-7c405a8183fb no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:20:22.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-116" for this suite. 
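The "consumable via the environment" case above injects ConfigMap keys through env valueFrom rather than a volume; a minimal sketch, with illustrative ConfigMap, pod, and variable names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-env-demo       # illustrative name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-env-demo  # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox:1.29          # stand-in image
    command: ["sh", "-c", "echo $CONFIG_DATA_1"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-env-demo
          key: data-1
EOF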
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":47,"skipped":744,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:20:22.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1897 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 30 21:20:22.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-2275' Mar 30 21:20:22.734: INFO: stderr: "" Mar 30 21:20:22.734: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Mar 30 21:20:27.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-2275 -o json' Mar 30 21:20:27.875: INFO: stderr: "" Mar 30 21:20:27.875: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-03-30T21:20:22Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-2275\",\n \"resourceVersion\": \"4050718\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-2275/pods/e2e-test-httpd-pod\",\n \"uid\": \"9b911321-f40b-457c-80ca-d1051a6afbf7\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-cglsz\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-cglsz\",\n \"secret\": {\n \"defaultMode\": 
420,\n \"secretName\": \"default-token-cglsz\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-30T21:20:22Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-30T21:20:25Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-30T21:20:25Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-03-30T21:20:22Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://41bdd02c8ab3132dd1ee3d9bf0d4c38b11e4ce5332fb42d7f5fdd202f7500849\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-03-30T21:20:25Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.10\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.156\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.156\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-03-30T21:20:22Z\"\n }\n}\n" STEP: replace the image in the pod Mar 30 21:20:27.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-2275' Mar 30 21:20:28.111: INFO: stderr: "" Mar 30 21:20:28.111: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1902 Mar 30 21:20:28.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-2275' Mar 30 21:20:39.230: INFO: stderr: "" Mar 30 21:20:39.230: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:20:39.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2275" for this suite. 
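The replace step above re-submits the fetched pod spec with the image swapped from httpd:2.4.38-alpine to busybox:1.29. Done by hand it looks like this sketch, where the sed substitution stands in for the test's in-memory edit and the namespace flag is omitted for brevity:

kubectl get pod e2e-test-httpd-pod -o yaml \
  | sed 's|docker.io/library/httpd:2.4.38-alpine|docker.io/library/busybox:1.29|' \
  | kubectl replace -f -
# image is a mutable pod field, so replace succeeds; busybox exits immediately,
# which is fine here because only the spec's image is verified afterwards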
• [SLOW TEST:16.663 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1893 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":48,"skipped":759,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:20:39.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 30 21:20:39.362: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Mar 30 21:20:39.367: INFO: Number of nodes with available pods: 0 Mar 30 21:20:39.367: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Mar 30 21:20:39.415: INFO: Number of nodes with available pods: 0 Mar 30 21:20:39.415: INFO: Node jerma-worker is running more than one daemon pod Mar 30 21:20:40.420: INFO: Number of nodes with available pods: 0 Mar 30 21:20:40.420: INFO: Node jerma-worker is running more than one daemon pod Mar 30 21:20:41.419: INFO: Number of nodes with available pods: 0 Mar 30 21:20:41.419: INFO: Node jerma-worker is running more than one daemon pod Mar 30 21:20:42.419: INFO: Number of nodes with available pods: 1 Mar 30 21:20:42.419: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Mar 30 21:20:42.451: INFO: Number of nodes with available pods: 1 Mar 30 21:20:42.451: INFO: Number of running nodes: 0, number of available pods: 1 Mar 30 21:20:43.455: INFO: Number of nodes with available pods: 0 Mar 30 21:20:43.455: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Mar 30 21:20:43.480: INFO: Number of nodes with available pods: 0 Mar 30 21:20:43.481: INFO: Node jerma-worker is running more than one daemon pod Mar 30 21:20:44.483: INFO: Number of nodes with available pods: 0 Mar 30 21:20:44.483: INFO: Node jerma-worker is running more than one daemon pod Mar 30 21:20:45.485: INFO: Number of nodes with available pods: 0 Mar 30 21:20:45.485: INFO: Node jerma-worker is running more than one daemon pod Mar 30 21:20:46.485: INFO: Number of nodes with available pods: 0 Mar 30 21:20:46.485: INFO: Node jerma-worker is running more than one daemon pod Mar 30 21:20:47.485: INFO: Number of nodes with available pods: 0 Mar 30 21:20:47.485: INFO: Node jerma-worker is running more than one daemon pod Mar 30 21:20:48.485: INFO: Number of nodes with available pods: 0 Mar 30 21:20:48.485: INFO: Node jerma-worker is running more than one daemon pod Mar 30 21:20:49.485: INFO: Number of nodes with available pods: 1 Mar 30 21:20:49.485: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-114, will wait for the garbage collector to delete the pods Mar 30 21:20:49.549: INFO: Deleting DaemonSet.extensions daemon-set took: 6.419703ms Mar 30 21:20:49.850: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.226642ms Mar 30 21:20:59.253: INFO: Number of nodes with available pods: 0 Mar 30 21:20:59.253: INFO: Number of running nodes: 0, number of available pods: 0 Mar 30 21:20:59.256: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-114/daemonsets","resourceVersion":"4050903"},"items":null} Mar 30 21:20:59.259: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-114/pods","resourceVersion":"4050903"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:20:59.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-114" for this suite. 
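The complex-daemon case above drives scheduling purely through node labels plus a pod-template nodeSelector, then flips the update strategy to RollingUpdate. A sketch of the shape involved; the selector key, labels, and image are assumptions, since the log only shows the blue-to-green relabelling:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set               # name as logged above
spec:
  selector:
    matchLabels:
      app: daemon-set            # assumed pod label
  updateStrategy:
    type: RollingUpdate          # the strategy the test switches to
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: green             # assumed label key; the test relabels a node blue -> green
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1   # stand-in image
EOF
kubectl label node jerma-worker2 color=green --overwrite   # relabelling is what moves the daemon pod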
• [SLOW TEST:20.054 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":49,"skipped":789,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:20:59.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-a5bd4bd0-79dd-4690-bf45-3fa2354da5cc STEP: Creating a pod to test consume secrets Mar 30 21:20:59.371: INFO: Waiting up to 5m0s for pod "pod-secrets-253c13fc-050c-43d5-a5ad-73504a0cc1b0" in namespace "secrets-9320" to be "success or failure" Mar 30 21:20:59.375: INFO: Pod "pod-secrets-253c13fc-050c-43d5-a5ad-73504a0cc1b0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.324039ms Mar 30 21:21:01.380: INFO: Pod "pod-secrets-253c13fc-050c-43d5-a5ad-73504a0cc1b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008643876s Mar 30 21:21:03.383: INFO: Pod "pod-secrets-253c13fc-050c-43d5-a5ad-73504a0cc1b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012225403s STEP: Saw pod success Mar 30 21:21:03.383: INFO: Pod "pod-secrets-253c13fc-050c-43d5-a5ad-73504a0cc1b0" satisfied condition "success or failure" Mar 30 21:21:03.387: INFO: Trying to get logs from node jerma-worker pod pod-secrets-253c13fc-050c-43d5-a5ad-73504a0cc1b0 container secret-volume-test: STEP: delete the pod Mar 30 21:21:03.413: INFO: Waiting for pod pod-secrets-253c13fc-050c-43d5-a5ad-73504a0cc1b0 to disappear Mar 30 21:21:03.416: INFO: Pod pod-secrets-253c13fc-050c-43d5-a5ad-73504a0cc1b0 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:21:03.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9320" for this suite. 
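The secrets test above succeeds because of two spec details the log never prints: an items mapping that re-exposes a secret key under a chosen relative path, and a per-item mode that fixes the mounted file's permission bits (the "Item Mode" in the test name). A sketch of that pod shape, assuming a recent context-based client-go; the secret name, key, paths, and the agnhost mounttest arguments are illustrative stand-ins for the generated names in the log:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	mode := int32(0400) // the "Item Mode" under test: the file lands with these bits
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: "secret-test", // assumed to exist with key "data-1"
						Items: []corev1.KeyToPath{{
							Key:  "data-1",
							Path: "new-path-data-1", // the mapping: key surfaces at this relative path
							Mode: &mode,
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "secret-volume-test",
				Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
				Args:  []string{"mounttest", "--file_content=/etc/secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "secret-volume", MountPath: "/etc/secret-volume",
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(
		context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}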
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":50,"skipped":816,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:21:03.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Mar 30 21:21:03.605: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-4477 /api/v1/namespaces/watch-4477/configmaps/e2e-watch-test-resource-version 5af9afc2-a141-41fe-8c9c-77af5b9a3b38 4050942 0 2020-03-30 21:21:03 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 30 21:21:03.605: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-4477 /api/v1/namespaces/watch-4477/configmaps/e2e-watch-test-resource-version 5af9afc2-a141-41fe-8c9c-77af5b9a3b38 4050943 0 2020-03-30 21:21:03 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:21:03.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4477" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":51,"skipped":837,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:21:03.617: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-2626 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 30 21:21:03.687: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 30 21:21:27.788: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.160 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2626 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 30 21:21:27.788: INFO: >>> kubeConfig: /root/.kube/config I0330 21:21:27.821270 6 log.go:172] (0xc001758210) (0xc0028c2b40) Create stream I0330 21:21:27.821312 6 log.go:172] (0xc001758210) (0xc0028c2b40) Stream added, broadcasting: 1 I0330 21:21:27.823491 6 log.go:172] (0xc001758210) Reply frame received for 1 I0330 21:21:27.823529 6 log.go:172] (0xc001758210) (0xc002850140) Create stream I0330 21:21:27.823541 6 log.go:172] (0xc001758210) (0xc002850140) Stream added, broadcasting: 3 I0330 21:21:27.824333 6 log.go:172] (0xc001758210) Reply frame received for 3 I0330 21:21:27.824372 6 log.go:172] (0xc001758210) (0xc002850280) Create stream I0330 21:21:27.824390 6 log.go:172] (0xc001758210) (0xc002850280) Stream added, broadcasting: 5 I0330 21:21:27.825394 6 log.go:172] (0xc001758210) Reply frame received for 5 I0330 21:21:28.878456 6 log.go:172] (0xc001758210) Data frame received for 3 I0330 21:21:28.878505 6 log.go:172] (0xc002850140) (3) Data frame handling I0330 21:21:28.878547 6 log.go:172] (0xc002850140) (3) Data frame sent I0330 21:21:28.878649 6 log.go:172] (0xc001758210) Data frame received for 3 I0330 21:21:28.878689 6 log.go:172] (0xc002850140) (3) Data frame handling I0330 21:21:28.879096 6 log.go:172] (0xc001758210) Data frame received for 5 I0330 21:21:28.879133 6 log.go:172] (0xc002850280) (5) Data frame handling I0330 21:21:28.881503 6 log.go:172] (0xc001758210) Data frame received for 1 I0330 21:21:28.881592 6 log.go:172] (0xc0028c2b40) (1) Data frame handling I0330 21:21:28.881628 6 log.go:172] (0xc0028c2b40) (1) Data frame sent I0330 21:21:28.881658 6 log.go:172] (0xc001758210) (0xc0028c2b40) Stream removed, broadcasting: 1 I0330 21:21:28.881686 6 log.go:172] (0xc001758210) Go away received I0330 21:21:28.882108 6 log.go:172] (0xc001758210) (0xc0028c2b40) Stream removed, broadcasting: 1 I0330 21:21:28.882131 6 log.go:172] (0xc001758210) (0xc002850140) Stream removed, broadcasting: 3 I0330 21:21:28.882142 6 
log.go:172] (0xc001758210) (0xc002850280) Stream removed, broadcasting: 5 Mar 30 21:21:28.882: INFO: Found all expected endpoints: [netserver-0] Mar 30 21:21:28.885: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.208 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2626 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 30 21:21:28.885: INFO: >>> kubeConfig: /root/.kube/config I0330 21:21:28.919704 6 log.go:172] (0xc0009ac000) (0xc002850a00) Create stream I0330 21:21:28.919725 6 log.go:172] (0xc0009ac000) (0xc002850a00) Stream added, broadcasting: 1 I0330 21:21:28.922525 6 log.go:172] (0xc0009ac000) Reply frame received for 1 I0330 21:21:28.922569 6 log.go:172] (0xc0009ac000) (0xc0028c2e60) Create stream I0330 21:21:28.922585 6 log.go:172] (0xc0009ac000) (0xc0028c2e60) Stream added, broadcasting: 3 I0330 21:21:28.923499 6 log.go:172] (0xc0009ac000) Reply frame received for 3 I0330 21:21:28.923541 6 log.go:172] (0xc0009ac000) (0xc0020383c0) Create stream I0330 21:21:28.923556 6 log.go:172] (0xc0009ac000) (0xc0020383c0) Stream added, broadcasting: 5 I0330 21:21:28.924379 6 log.go:172] (0xc0009ac000) Reply frame received for 5 I0330 21:21:29.999810 6 log.go:172] (0xc0009ac000) Data frame received for 3 I0330 21:21:29.999868 6 log.go:172] (0xc0028c2e60) (3) Data frame handling I0330 21:21:29.999988 6 log.go:172] (0xc0028c2e60) (3) Data frame sent I0330 21:21:30.000029 6 log.go:172] (0xc0009ac000) Data frame received for 3 I0330 21:21:30.000067 6 log.go:172] (0xc0028c2e60) (3) Data frame handling I0330 21:21:30.000382 6 log.go:172] (0xc0009ac000) Data frame received for 5 I0330 21:21:30.000483 6 log.go:172] (0xc0020383c0) (5) Data frame handling I0330 21:21:30.007696 6 log.go:172] (0xc0009ac000) Data frame received for 1 I0330 21:21:30.007722 6 log.go:172] (0xc002850a00) (1) Data frame handling I0330 21:21:30.007740 6 log.go:172] (0xc002850a00) (1) Data frame sent I0330 21:21:30.009082 6 log.go:172] (0xc0009ac000) (0xc002850a00) Stream removed, broadcasting: 1 I0330 21:21:30.009268 6 log.go:172] (0xc0009ac000) (0xc002850a00) Stream removed, broadcasting: 1 I0330 21:21:30.009307 6 log.go:172] (0xc0009ac000) Go away received I0330 21:21:30.009341 6 log.go:172] (0xc0009ac000) (0xc0028c2e60) Stream removed, broadcasting: 3 I0330 21:21:30.009363 6 log.go:172] (0xc0009ac000) (0xc0020383c0) Stream removed, broadcasting: 5 Mar 30 21:21:30.009: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:21:30.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2626" for this suite. 
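Each ExecWithOptions above shells into the host-network test pod and runs `echo hostName | nc -w 1 -u <podIP> 8081`; the Create stream / Data frame lines in between are only the SPDY exec transport, not the check itself. Stripped of that plumbing, the probe is a plain UDP round-trip: send "hostName", read back the netserver's hostname. A standalone sketch of that probe; the pod IP is the one from the log and is reachable only from inside the cluster's pod network:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Pod IP taken from the log above; substitute your own netserver pod IP.
	conn, err := net.DialTimeout("udp", "10.244.1.160:8081", time.Second)
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	_ = conn.SetDeadline(time.Now().Add(time.Second)) // mirrors nc's -w 1 timeout
	if _, err := conn.Write([]byte("hostName")); err != nil {
		panic(err)
	}
	buf := make([]byte, 1024)
	n, err := conn.Read(buf)
	if err != nil {
		panic(err)
	}
	fmt.Printf("endpoint answered: %s\n", buf[:n]) // e.g. netserver-0
}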
• [SLOW TEST:26.399 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":52,"skipped":860,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:21:30.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Mar 30 21:21:30.674: INFO: Pod name wrapped-volume-race-6973d56b-80d0-4497-ac13-db091fabec62: Found 0 pods out of 5 Mar 30 21:21:35.960: INFO: Pod name wrapped-volume-race-6973d56b-80d0-4497-ac13-db091fabec62: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-6973d56b-80d0-4497-ac13-db091fabec62 in namespace emptydir-wrapper-8211, will wait for the garbage collector to delete the pods Mar 30 21:21:48.052: INFO: Deleting ReplicationController wrapped-volume-race-6973d56b-80d0-4497-ac13-db091fabec62 took: 8.477348ms Mar 30 21:21:48.352: INFO: Terminating ReplicationController wrapped-volume-race-6973d56b-80d0-4497-ac13-db091fabec62 pods took: 300.262639ms STEP: Creating RC which spawns configmap-volume pods Mar 30 21:21:59.814: INFO: Pod name wrapped-volume-race-8a3823d0-4ac8-4d9e-bfec-02f40ec0855b: Found 0 pods out of 5 Mar 30 21:22:04.822: INFO: Pod name wrapped-volume-race-8a3823d0-4ac8-4d9e-bfec-02f40ec0855b: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-8a3823d0-4ac8-4d9e-bfec-02f40ec0855b in namespace emptydir-wrapper-8211, will wait for the garbage collector to delete the pods Mar 30 21:22:16.913: INFO: Deleting ReplicationController wrapped-volume-race-8a3823d0-4ac8-4d9e-bfec-02f40ec0855b took: 7.344373ms Mar 30 21:22:17.314: INFO: Terminating ReplicationController wrapped-volume-race-8a3823d0-4ac8-4d9e-bfec-02f40ec0855b pods took: 400.2383ms STEP: Creating RC which spawns configmap-volume pods Mar 30 21:22:30.550: INFO: Pod name wrapped-volume-race-2556b13c-51f9-4c78-acdb-f274ca87f13f: Found 0 pods out of 5 Mar 30 21:22:35.558: INFO: Pod name wrapped-volume-race-2556b13c-51f9-4c78-acdb-f274ca87f13f: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-2556b13c-51f9-4c78-acdb-f274ca87f13f in namespace emptydir-wrapper-8211, 
will wait for the garbage collector to delete the pods Mar 30 21:22:49.639: INFO: Deleting ReplicationController wrapped-volume-race-2556b13c-51f9-4c78-acdb-f274ca87f13f took: 7.662385ms Mar 30 21:22:50.039: INFO: Terminating ReplicationController wrapped-volume-race-2556b13c-51f9-4c78-acdb-f274ca87f13f pods took: 400.231829ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:23:00.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-8211" for this suite. • [SLOW TEST:90.106 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":53,"skipped":864,"failed":0} SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:23:00.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-secret-dggv STEP: Creating a pod to test atomic-volume-subpath Mar 30 21:23:00.209: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-dggv" in namespace "subpath-1593" to be "success or failure" Mar 30 21:23:00.229: INFO: Pod "pod-subpath-test-secret-dggv": Phase="Pending", Reason="", readiness=false. Elapsed: 19.74084ms Mar 30 21:23:02.234: INFO: Pod "pod-subpath-test-secret-dggv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024226657s Mar 30 21:23:04.238: INFO: Pod "pod-subpath-test-secret-dggv": Phase="Running", Reason="", readiness=true. Elapsed: 4.028555692s Mar 30 21:23:06.243: INFO: Pod "pod-subpath-test-secret-dggv": Phase="Running", Reason="", readiness=true. Elapsed: 6.034166829s Mar 30 21:23:08.248: INFO: Pod "pod-subpath-test-secret-dggv": Phase="Running", Reason="", readiness=true. Elapsed: 8.038637759s Mar 30 21:23:10.258: INFO: Pod "pod-subpath-test-secret-dggv": Phase="Running", Reason="", readiness=true. Elapsed: 10.048192297s Mar 30 21:23:12.261: INFO: Pod "pod-subpath-test-secret-dggv": Phase="Running", Reason="", readiness=true. Elapsed: 12.051853204s Mar 30 21:23:14.265: INFO: Pod "pod-subpath-test-secret-dggv": Phase="Running", Reason="", readiness=true. Elapsed: 14.055991705s Mar 30 21:23:16.270: INFO: Pod "pod-subpath-test-secret-dggv": Phase="Running", Reason="", readiness=true. 
Elapsed: 16.060268011s Mar 30 21:23:18.274: INFO: Pod "pod-subpath-test-secret-dggv": Phase="Running", Reason="", readiness=true. Elapsed: 18.064452187s Mar 30 21:23:20.278: INFO: Pod "pod-subpath-test-secret-dggv": Phase="Running", Reason="", readiness=true. Elapsed: 20.068704205s Mar 30 21:23:22.281: INFO: Pod "pod-subpath-test-secret-dggv": Phase="Running", Reason="", readiness=true. Elapsed: 22.072012116s Mar 30 21:23:24.285: INFO: Pod "pod-subpath-test-secret-dggv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.076028846s STEP: Saw pod success Mar 30 21:23:24.285: INFO: Pod "pod-subpath-test-secret-dggv" satisfied condition "success or failure" Mar 30 21:23:24.288: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-secret-dggv container test-container-subpath-secret-dggv: STEP: delete the pod Mar 30 21:23:24.323: INFO: Waiting for pod pod-subpath-test-secret-dggv to disappear Mar 30 21:23:24.328: INFO: Pod pod-subpath-test-secret-dggv no longer exists STEP: Deleting pod pod-subpath-test-secret-dggv Mar 30 21:23:24.328: INFO: Deleting pod "pod-subpath-test-secret-dggv" in namespace "subpath-1593" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:23:24.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1593" for this suite. • [SLOW TEST:24.214 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":54,"skipped":867,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:23:24.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 30 21:23:24.807: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 30 21:23:26.821: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63721200204, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721200204, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721200204, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721200204, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 30 21:23:29.876: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:23:30.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2637" for this suite. STEP: Destroying namespace "webhook-2637-markers" for this suite. 
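What this webhook test actually exercises is an exemption: the webhooks registered above match broadly, yet the API server never consults admission webhooks for ValidatingWebhookConfiguration or MutatingWebhookConfiguration objects themselves, so the dummy configurations stay mutable and deletable even with such hooks in place. A sketch of a deliberately broad registration, assuming a recent context-based client-go; the service reference, handler path, and omitted caBundle are illustrative:

package main

import (
	"context"

	admissionv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	fail := admissionv1.Fail
	none := admissionv1.SideEffectClassNone
	path := "/always-deny" // illustrative handler path
	hook := &admissionv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "broad-webhook-example"},
		Webhooks: []admissionv1.ValidatingWebhook{{
			Name: "deny.example.com",
			// Matches every operation on every resource, yet is still never
			// consulted for *WebhookConfiguration objects themselves.
			Rules: []admissionv1.RuleWithOperations{{
				Operations: []admissionv1.OperationType{admissionv1.OperationAll},
				Rule: admissionv1.Rule{
					APIGroups:   []string{"*"},
					APIVersions: []string{"*"},
					Resources:   []string{"*/*"},
				},
			}},
			ClientConfig: admissionv1.WebhookClientConfig{
				Service: &admissionv1.ServiceReference{
					Namespace: "webhook-2637", // illustrative service reference
					Name:      "e2e-test-webhook",
					Path:      &path,
				},
				// CABundle for the webhook's serving cert goes here in real use.
			},
			FailurePolicy:           &fail,
			SideEffects:             &none,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
	if _, err := cs.AdmissionregistrationV1().ValidatingWebhookConfigurations().
		Create(context.TODO(), hook, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}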
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.846 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":55,"skipped":882,"failed":0} SSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:23:30.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... Mar 30 21:23:30.234: INFO: Created pod &Pod{ObjectMeta:{dns-481 dns-481 /api/v1/namespaces/dns-481/pods/dns-481 6e242cb8-b260-4403-87b7-8a3160edea63 4052392 0 2020-03-30 21:23:30 +0000 UTC map[] map[] [] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-26dcq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-26dcq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-26dcq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... 
Mar 30 21:23:34.242: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-481 PodName:dns-481 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 30 21:23:34.242: INFO: >>> kubeConfig: /root/.kube/config I0330 21:23:34.278095 6 log.go:172] (0xc003d082c0) (0xc00167c280) Create stream I0330 21:23:34.278131 6 log.go:172] (0xc003d082c0) (0xc00167c280) Stream added, broadcasting: 1 I0330 21:23:34.280441 6 log.go:172] (0xc003d082c0) Reply frame received for 1 I0330 21:23:34.280492 6 log.go:172] (0xc003d082c0) (0xc001ac2000) Create stream I0330 21:23:34.280515 6 log.go:172] (0xc003d082c0) (0xc001ac2000) Stream added, broadcasting: 3 I0330 21:23:34.281985 6 log.go:172] (0xc003d082c0) Reply frame received for 3 I0330 21:23:34.282031 6 log.go:172] (0xc003d082c0) (0xc00167c640) Create stream I0330 21:23:34.282053 6 log.go:172] (0xc003d082c0) (0xc00167c640) Stream added, broadcasting: 5 I0330 21:23:34.283115 6 log.go:172] (0xc003d082c0) Reply frame received for 5 I0330 21:23:34.366180 6 log.go:172] (0xc003d082c0) Data frame received for 3 I0330 21:23:34.366208 6 log.go:172] (0xc001ac2000) (3) Data frame handling I0330 21:23:34.366225 6 log.go:172] (0xc001ac2000) (3) Data frame sent I0330 21:23:34.367484 6 log.go:172] (0xc003d082c0) Data frame received for 5 I0330 21:23:34.367549 6 log.go:172] (0xc00167c640) (5) Data frame handling I0330 21:23:34.367577 6 log.go:172] (0xc003d082c0) Data frame received for 3 I0330 21:23:34.367584 6 log.go:172] (0xc001ac2000) (3) Data frame handling I0330 21:23:34.369474 6 log.go:172] (0xc003d082c0) Data frame received for 1 I0330 21:23:34.369500 6 log.go:172] (0xc00167c280) (1) Data frame handling I0330 21:23:34.369517 6 log.go:172] (0xc00167c280) (1) Data frame sent I0330 21:23:34.369533 6 log.go:172] (0xc003d082c0) (0xc00167c280) Stream removed, broadcasting: 1 I0330 21:23:34.369546 6 log.go:172] (0xc003d082c0) Go away received I0330 21:23:34.369637 6 log.go:172] (0xc003d082c0) (0xc00167c280) Stream removed, broadcasting: 1 I0330 21:23:34.369666 6 log.go:172] (0xc003d082c0) (0xc001ac2000) Stream removed, broadcasting: 3 I0330 21:23:34.369682 6 log.go:172] (0xc003d082c0) (0xc00167c640) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
Mar 30 21:23:34.369: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-481 PodName:dns-481 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 30 21:23:34.369: INFO: >>> kubeConfig: /root/.kube/config I0330 21:23:34.401075 6 log.go:172] (0xc001758370) (0xc001fe2460) Create stream I0330 21:23:34.401102 6 log.go:172] (0xc001758370) (0xc001fe2460) Stream added, broadcasting: 1 I0330 21:23:34.403057 6 log.go:172] (0xc001758370) Reply frame received for 1 I0330 21:23:34.403116 6 log.go:172] (0xc001758370) (0xc0027e1cc0) Create stream I0330 21:23:34.403134 6 log.go:172] (0xc001758370) (0xc0027e1cc0) Stream added, broadcasting: 3 I0330 21:23:34.404546 6 log.go:172] (0xc001758370) Reply frame received for 3 I0330 21:23:34.404604 6 log.go:172] (0xc001758370) (0xc0027e1d60) Create stream I0330 21:23:34.404630 6 log.go:172] (0xc001758370) (0xc0027e1d60) Stream added, broadcasting: 5 I0330 21:23:34.405993 6 log.go:172] (0xc001758370) Reply frame received for 5 I0330 21:23:34.465989 6 log.go:172] (0xc001758370) Data frame received for 3 I0330 21:23:34.466021 6 log.go:172] (0xc0027e1cc0) (3) Data frame handling I0330 21:23:34.466048 6 log.go:172] (0xc0027e1cc0) (3) Data frame sent I0330 21:23:34.466576 6 log.go:172] (0xc001758370) Data frame received for 3 I0330 21:23:34.466608 6 log.go:172] (0xc0027e1cc0) (3) Data frame handling I0330 21:23:34.466636 6 log.go:172] (0xc001758370) Data frame received for 5 I0330 21:23:34.466651 6 log.go:172] (0xc0027e1d60) (5) Data frame handling I0330 21:23:34.468177 6 log.go:172] (0xc001758370) Data frame received for 1 I0330 21:23:34.468209 6 log.go:172] (0xc001fe2460) (1) Data frame handling I0330 21:23:34.468233 6 log.go:172] (0xc001fe2460) (1) Data frame sent I0330 21:23:34.468255 6 log.go:172] (0xc001758370) (0xc001fe2460) Stream removed, broadcasting: 1 I0330 21:23:34.468341 6 log.go:172] (0xc001758370) (0xc001fe2460) Stream removed, broadcasting: 1 I0330 21:23:34.468353 6 log.go:172] (0xc001758370) (0xc0027e1cc0) Stream removed, broadcasting: 3 I0330 21:23:34.468531 6 log.go:172] (0xc001758370) Go away received I0330 21:23:34.468559 6 log.go:172] (0xc001758370) (0xc0027e1d60) Stream removed, broadcasting: 5 Mar 30 21:23:34.468: INFO: Deleting pod dns-481... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:23:34.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-481" for this suite. 
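The pod dump a few lines up is dominated by defaults; only two fields drive this test. DNSPolicy None switches off inheritance of cluster DNS, and the DNSConfig block becomes the pod's entire resolver setup, which is what the /agnhost dns-server-list and dns-suffix probes then read back from the pod's /etc/resolv.conf. The relevant slice of that spec, reduced to a compilable sketch (pod name and image wiring are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// customDNSPod keeps only the fields that matter here: dnsPolicy None turns
// off cluster-DNS inheritance, and dnsConfig supplies the resolvers verbatim.
func customDNSPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-example"}, // illustrative name
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
				Args:  []string{"pause"},
			}},
			DNSPolicy: corev1.DNSNone,
			DNSConfig: &corev1.PodDNSConfig{
				Nameservers: []string{"1.1.1.1"},           // lands verbatim in /etc/resolv.conf
				Searches:    []string{"resolv.conf.local"}, // ditto for the search line
			},
		},
	}
}

func main() {
	fmt.Println(customDNSPod().Spec.DNSConfig.Nameservers)
}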
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":56,"skipped":891,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:23:34.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 30 21:23:37.688: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:23:37.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5046" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":57,"skipped":900,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:23:37.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 30 21:23:37.870: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 30 21:23:40.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1008 create -f -' Mar 30 21:23:43.793: INFO: stderr: "" Mar 30 21:23:43.794: INFO: stdout: "e2e-test-crd-publish-openapi-6138-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Mar 30 21:23:43.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1008 delete 
e2e-test-crd-publish-openapi-6138-crds test-cr' Mar 30 21:23:43.907: INFO: stderr: "" Mar 30 21:23:43.907: INFO: stdout: "e2e-test-crd-publish-openapi-6138-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Mar 30 21:23:43.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1008 apply -f -' Mar 30 21:23:44.150: INFO: stderr: "" Mar 30 21:23:44.150: INFO: stdout: "e2e-test-crd-publish-openapi-6138-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Mar 30 21:23:44.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1008 delete e2e-test-crd-publish-openapi-6138-crds test-cr' Mar 30 21:23:44.255: INFO: stderr: "" Mar 30 21:23:44.255: INFO: stdout: "e2e-test-crd-publish-openapi-6138-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Mar 30 21:23:44.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6138-crds' Mar 30 21:23:44.480: INFO: stderr: "" Mar 30 21:23:44.480: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6138-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:23:47.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1008" for this suite. • [SLOW TEST:9.546 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":58,"skipped":903,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:23:47.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1861 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 30 21:23:47.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 
--image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-9445' Mar 30 21:23:47.503: INFO: stderr: "" Mar 30 21:23:47.503: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1866 Mar 30 21:23:47.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-9445' Mar 30 21:23:59.229: INFO: stderr: "" Mar 30 21:23:59.229: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:23:59.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9445" for this suite. • [SLOW TEST:11.862 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1857 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":59,"skipped":932,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:23:59.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 30 21:23:59.274: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:24:03.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7166" for this suite. 
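The websocket-logs test above leaves no transport detail in the log: it hits the same /api/v1/namespaces/<ns>/pods/<pod>/log endpoint that `kubectl logs` uses, but over a websocket upgrade instead of a plain GET. A rough sketch with golang.org/x/net/websocket (the library the e2e framework itself wraps for this); the API server address, pod and container names, and bearer-token auth are placeholders, and InsecureSkipVerify is tolerable only against a throwaway test cluster:

package main

import (
	"crypto/tls"
	"fmt"

	"golang.org/x/net/websocket"
)

func main() {
	// Address, pod, and container are placeholders; the namespace matches the log.
	url := "wss://<apiserver>:6443/api/v1/namespaces/pods-7166/pods/<pod>/log?container=<container>"
	cfg, err := websocket.NewConfig(url, "http://localhost")
	if err != nil {
		panic(err)
	}
	cfg.Header.Set("Authorization", "Bearer <token>")     // placeholder credentials
	cfg.TlsConfig = &tls.Config{InsecureSkipVerify: true} // test clusters only
	ws, err := websocket.DialConfig(cfg)
	if err != nil {
		panic(err)
	}
	defer ws.Close()

	buf := make([]byte, 4096)
	for {
		n, err := ws.Read(buf)
		if n > 0 {
			fmt.Print(string(buf[:n])) // log lines stream over the socket
		}
		if err != nil {
			break // io.EOF once the container's log stream ends
		}
	}
}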
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":60,"skipped":973,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:24:03.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 30 21:24:03.919: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 30 21:24:05.928: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721200243, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721200243, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721200243, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721200243, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 30 21:24:08.963: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:24:09.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2750" for this suite. STEP: Destroying namespace "webhook-2750-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.863 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":61,"skipped":1014,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:24:09.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7161.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-7161.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7161.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7161.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-7161.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-7161.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-7161.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-7161.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-7161.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7161.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-7161.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7161.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-7161.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-7161.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-7161.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-7161.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-7161.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7161.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 30 21:24:15.331: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7161.svc.cluster.local from pod dns-7161/dns-test-84f58781-d7d1-4024-9c34-2458e747311a: the server could not find the requested resource (get pods dns-test-84f58781-d7d1-4024-9c34-2458e747311a) Mar 30 21:24:15.336: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7161.svc.cluster.local from pod dns-7161/dns-test-84f58781-d7d1-4024-9c34-2458e747311a: the server could not find the requested resource (get pods dns-test-84f58781-d7d1-4024-9c34-2458e747311a) Mar 30 21:24:15.339: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7161.svc.cluster.local from pod dns-7161/dns-test-84f58781-d7d1-4024-9c34-2458e747311a: the server could not find the requested resource (get pods dns-test-84f58781-d7d1-4024-9c34-2458e747311a) Mar 30 21:24:15.343: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7161.svc.cluster.local from pod dns-7161/dns-test-84f58781-d7d1-4024-9c34-2458e747311a: the server could not find the requested resource (get pods dns-test-84f58781-d7d1-4024-9c34-2458e747311a) Mar 30 21:24:15.401: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7161.svc.cluster.local from pod dns-7161/dns-test-84f58781-d7d1-4024-9c34-2458e747311a: the server could not find the requested resource (get pods dns-test-84f58781-d7d1-4024-9c34-2458e747311a) Mar 30 21:24:15.406: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7161.svc.cluster.local from pod dns-7161/dns-test-84f58781-d7d1-4024-9c34-2458e747311a: the server could not find the requested resource (get pods dns-test-84f58781-d7d1-4024-9c34-2458e747311a) Mar 30 21:24:15.410: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7161.svc.cluster.local from pod 
dns-7161/dns-test-84f58781-d7d1-4024-9c34-2458e747311a: the server could not find the requested resource (get pods dns-test-84f58781-d7d1-4024-9c34-2458e747311a)
Mar 30 21:24:15.413: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7161.svc.cluster.local from pod dns-7161/dns-test-84f58781-d7d1-4024-9c34-2458e747311a: the server could not find the requested resource (get pods dns-test-84f58781-d7d1-4024-9c34-2458e747311a)
Mar 30 21:24:15.418: INFO: Lookups using dns-7161/dns-test-84f58781-d7d1-4024-9c34-2458e747311a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7161.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7161.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7161.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7161.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7161.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7161.svc.cluster.local jessie_udp@dns-test-service-2.dns-7161.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7161.svc.cluster.local]
Mar 30 21:24:20.462: INFO: Lookups using dns-7161/dns-test-84f58781-d7d1-4024-9c34-2458e747311a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7161.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7161.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7161.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7161.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7161.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7161.svc.cluster.local jessie_udp@dns-test-service-2.dns-7161.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7161.svc.cluster.local]
Mar 30 21:24:25.457: INFO: Lookups using dns-7161/dns-test-84f58781-d7d1-4024-9c34-2458e747311a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7161.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7161.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7161.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7161.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7161.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7161.svc.cluster.local jessie_udp@dns-test-service-2.dns-7161.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7161.svc.cluster.local]
Mar 30 21:24:30.464: INFO: Lookups using dns-7161/dns-test-84f58781-d7d1-4024-9c34-2458e747311a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7161.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7161.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7161.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7161.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7161.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7161.svc.cluster.local jessie_udp@dns-test-service-2.dns-7161.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7161.svc.cluster.local]
Mar 30 21:24:35.458: INFO: Lookups using dns-7161/dns-test-84f58781-d7d1-4024-9c34-2458e747311a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7161.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7161.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7161.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7161.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7161.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7161.svc.cluster.local jessie_udp@dns-test-service-2.dns-7161.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7161.svc.cluster.local]
Mar 30 21:24:40.457: INFO: Lookups using dns-7161/dns-test-84f58781-d7d1-4024-9c34-2458e747311a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7161.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7161.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7161.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7161.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7161.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7161.svc.cluster.local jessie_udp@dns-test-service-2.dns-7161.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7161.svc.cluster.local]
Mar 30 21:24:45.458: INFO: DNS probes using dns-7161/dns-test-84f58781-d7d1-4024-9c34-2458e747311a succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 30 21:24:45.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7161" for this suite.
• [SLOW TEST:36.759 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":62,"skipped":1027,"failed":0}
SSSSSSSSSSSSSS
------------------------------
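The subdomain lookups above resolve only because the pod pins spec.hostname and spec.subdomain to a headless Service of the same name, which gives the pod the DNS name <hostname>.<subdomain>.<namespace>.svc.cluster.local. A minimal client-go sketch of that wiring; the Service and pod names mirror the log, while the image, labels, and port are illustrative assumptions:

// Sketch only, not the e2e framework's own fixture code.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	ns := "dns-7161"
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service-2", Namespace: ns},
		Spec: corev1.ServiceSpec{
			ClusterIP: corev1.ClusterIPNone, // headless: lookups return pod IPs
			Selector:  map[string]string{"dns-test": "true"},
			Ports:     []corev1.ServicePort{{Name: "http", Port: 80}},
		},
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "dns-querier-2",
			Namespace: ns,
			Labels:    map[string]string{"dns-test": "true"},
		},
		Spec: corev1.PodSpec{
			Hostname:  "dns-querier-2",      // first label of the pod's DNS name
			Subdomain: "dns-test-service-2", // must match the headless Service name
			Containers: []corev1.Container{{
				Name:    "querier",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	fmt.Printf("%s.%s.%s.svc.cluster.local\n", pod.Spec.Hostname, svc.Name, ns)
}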
[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 30 21:24:45.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Mar 30 21:24:46.087: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f37e9c69-0fd4-4cbe-b20a-6da9191873c0" in namespace "downward-api-3676" to be "success or failure"
Mar 30 21:24:46.110: INFO: Pod "downwardapi-volume-f37e9c69-0fd4-4cbe-b20a-6da9191873c0": Phase="Pending", Reason="", readiness=false. Elapsed: 23.05408ms
Mar 30 21:24:48.254: INFO: Pod "downwardapi-volume-f37e9c69-0fd4-4cbe-b20a-6da9191873c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.167699472s
Mar 30 21:24:50.259: INFO: Pod "downwardapi-volume-f37e9c69-0fd4-4cbe-b20a-6da9191873c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.1719729s
STEP: Saw pod success
Mar 30 21:24:50.259: INFO: Pod "downwardapi-volume-f37e9c69-0fd4-4cbe-b20a-6da9191873c0" satisfied condition "success or failure"
Mar 30 21:24:50.262: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-f37e9c69-0fd4-4cbe-b20a-6da9191873c0 container client-container:
STEP: delete the pod
Mar 30 21:24:50.298: INFO: Waiting for pod downwardapi-volume-f37e9c69-0fd4-4cbe-b20a-6da9191873c0 to disappear
Mar 30 21:24:50.312: INFO: Pod downwardapi-volume-f37e9c69-0fd4-4cbe-b20a-6da9191873c0 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 30 21:24:50.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3676" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":63,"skipped":1041,"failed":0}
SS
------------------------------
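The pod under test mounts a downwardAPI volume whose single file is projected from metadata.name, and the client-container simply cats it. A rough sketch of that pod shape, with illustrative object names, mount path, and image:

// Sketch only; names and image are assumptions, the volume shape is the point.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "cat /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							// The kubelet writes the pod's own name into this file.
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				},
			}},
		},
	}
	fmt.Println(pod.Spec.Volumes[0].Name)
}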
[sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 30 21:24:50.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Mar 30 21:24:50.428: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2c377cd3-1bf1-489e-9b1d-f784a7806c2c" in namespace "downward-api-398" to be "success or failure"
Mar 30 21:24:50.486: INFO: Pod "downwardapi-volume-2c377cd3-1bf1-489e-9b1d-f784a7806c2c": Phase="Pending", Reason="", readiness=false. Elapsed: 58.089746ms
Mar 30 21:24:52.490: INFO: Pod "downwardapi-volume-2c377cd3-1bf1-489e-9b1d-f784a7806c2c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062651901s
Mar 30 21:24:54.494: INFO: Pod "downwardapi-volume-2c377cd3-1bf1-489e-9b1d-f784a7806c2c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.066334823s
STEP: Saw pod success
Mar 30 21:24:54.494: INFO: Pod "downwardapi-volume-2c377cd3-1bf1-489e-9b1d-f784a7806c2c" satisfied condition "success or failure"
Mar 30 21:24:54.497: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-2c377cd3-1bf1-489e-9b1d-f784a7806c2c container client-container:
STEP: delete the pod
Mar 30 21:24:54.528: INFO: Waiting for pod downwardapi-volume-2c377cd3-1bf1-489e-9b1d-f784a7806c2c to disappear
Mar 30 21:24:54.533: INFO: Pod downwardapi-volume-2c377cd3-1bf1-489e-9b1d-f784a7806c2c no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 30 21:24:54.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-398" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":64,"skipped":1043,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 30 21:24:54.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating pod
Mar 30 21:24:58.656: INFO: Pod pod-hostip-b478ff86-8277-459d-b838-05b50ddff147 has hostIP: 172.17.0.8
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 30 21:24:58.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9149" for this suite.
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":65,"skipped":1113,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
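Reading the field the host-IP test asserts on takes a single Get; Status.HostIP is filled in by the kubelet once the pod is bound to a node. A minimal sketch, assuming the kubeconfig path from the log and hypothetical pod and namespace names:

// Sketch only; pod name is invented for illustration.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Modern client-go Get signature; v1.17-era clients omitted the context.
	pod, err := cs.CoreV1().Pods("pods-9149").Get(context.TODO(), "pod-hostip-example", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// HostIP is set by the kubelet once the pod is scheduled onto a node.
	fmt.Println("hostIP:", pod.Status.HostIP)
}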
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":65,"skipped":1113,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:24:58.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-c8dl STEP: Creating a pod to test atomic-volume-subpath Mar 30 21:24:58.751: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-c8dl" in namespace "subpath-7064" to be "success or failure" Mar 30 21:24:58.755: INFO: Pod "pod-subpath-test-configmap-c8dl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.14155ms Mar 30 21:25:00.759: INFO: Pod "pod-subpath-test-configmap-c8dl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008095234s Mar 30 21:25:02.764: INFO: Pod "pod-subpath-test-configmap-c8dl": Phase="Running", Reason="", readiness=true. Elapsed: 4.012279217s Mar 30 21:25:04.768: INFO: Pod "pod-subpath-test-configmap-c8dl": Phase="Running", Reason="", readiness=true. Elapsed: 6.016310919s Mar 30 21:25:06.771: INFO: Pod "pod-subpath-test-configmap-c8dl": Phase="Running", Reason="", readiness=true. Elapsed: 8.020134069s Mar 30 21:25:08.775: INFO: Pod "pod-subpath-test-configmap-c8dl": Phase="Running", Reason="", readiness=true. Elapsed: 10.023658735s Mar 30 21:25:10.778: INFO: Pod "pod-subpath-test-configmap-c8dl": Phase="Running", Reason="", readiness=true. Elapsed: 12.026713053s Mar 30 21:25:12.782: INFO: Pod "pod-subpath-test-configmap-c8dl": Phase="Running", Reason="", readiness=true. Elapsed: 14.03079523s Mar 30 21:25:14.786: INFO: Pod "pod-subpath-test-configmap-c8dl": Phase="Running", Reason="", readiness=true. Elapsed: 16.034410155s Mar 30 21:25:16.790: INFO: Pod "pod-subpath-test-configmap-c8dl": Phase="Running", Reason="", readiness=true. Elapsed: 18.038703853s Mar 30 21:25:18.794: INFO: Pod "pod-subpath-test-configmap-c8dl": Phase="Running", Reason="", readiness=true. Elapsed: 20.04305665s Mar 30 21:25:20.798: INFO: Pod "pod-subpath-test-configmap-c8dl": Phase="Running", Reason="", readiness=true. Elapsed: 22.047101122s Mar 30 21:25:22.805: INFO: Pod "pod-subpath-test-configmap-c8dl": Phase="Succeeded", Reason="", readiness=false. 
[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 30 21:25:22.936: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Mar 30 21:25:23.101: INFO: Create a RollingUpdate DaemonSet
Mar 30 21:25:23.105: INFO: Check that daemon pods launch on every node of the cluster
Mar 30 21:25:23.116: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 30 21:25:23.120: INFO: Number of nodes with available pods: 0
Mar 30 21:25:23.120: INFO: Node jerma-worker is running more than one daemon pod
Mar 30 21:25:24.126: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 30 21:25:24.130: INFO: Number of nodes with available pods: 0
Mar 30 21:25:24.130: INFO: Node jerma-worker is running more than one daemon pod
Mar 30 21:25:25.124: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 30 21:25:25.127: INFO: Number of nodes with available pods: 0
Mar 30 21:25:25.127: INFO: Node jerma-worker is running more than one daemon pod
Mar 30 21:25:26.128: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 30 21:25:26.131: INFO: Number of nodes with available pods: 0
Mar 30 21:25:26.131: INFO: Node jerma-worker is running more than one daemon pod
Mar 30 21:25:27.167: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 30 21:25:27.175: INFO: Number of nodes with available pods: 2
Mar 30 21:25:27.175: INFO: Number of running nodes: 2, number of available pods: 2
Mar 30 21:25:27.175: INFO: Update the DaemonSet to trigger a rollout
Mar 30 21:25:27.180: INFO: Updating DaemonSet daemon-set
Mar 30 21:25:40.219: INFO: Roll back the DaemonSet before rollout is complete
Mar 30 21:25:40.225: INFO: Updating DaemonSet daemon-set
Mar 30 21:25:40.225: INFO: Make sure DaemonSet rollback is complete
Mar 30 21:25:40.310: INFO: Wrong image for pod: daemon-set-69rkl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Mar 30 21:25:40.310: INFO: Pod daemon-set-69rkl is not available
Mar 30 21:25:40.313: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 30 21:25:41.318: INFO: Wrong image for pod: daemon-set-69rkl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Mar 30 21:25:41.318: INFO: Pod daemon-set-69rkl is not available
Mar 30 21:25:41.322: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 30 21:25:42.317: INFO: Wrong image for pod: daemon-set-69rkl. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Mar 30 21:25:42.318: INFO: Pod daemon-set-69rkl is not available
Mar 30 21:25:42.321: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Mar 30 21:25:43.317: INFO: Pod daemon-set-wcmsz is not available
Mar 30 21:25:43.320: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6498, will wait for the garbage collector to delete the pods
Mar 30 21:25:43.383: INFO: Deleting DaemonSet.extensions daemon-set took: 5.358663ms
Mar 30 21:25:43.783: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.268317ms
Mar 30 21:25:46.087: INFO: Number of nodes with available pods: 0
Mar 30 21:25:46.087: INFO: Number of running nodes: 0, number of available pods: 0
Mar 30 21:25:46.090: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6498/daemonsets","resourceVersion":"4053217"},"items":null}
Mar 30 21:25:46.093: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6498/pods","resourceVersion":"4053217"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 30 21:25:46.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6498" for this suite.
• [SLOW TEST:23.176 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":67,"skipped":1297,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
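The rollback scenario above boils down to: create a RollingUpdate DaemonSet, push a broken template (foo:non-existent), then write the previous template back before the rollout finishes. A hedged sketch of both halves; the helper names are invented, and the rollBack shown is the essence of what kubectl rollout undo does, not the e2e framework's own code:

// Sketch only; modern client-go signatures (v1.17-era clients omitted the context).
package main

import (
	"context"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func newDaemonSet() *appsv1.DaemonSet {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	return &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				Type: appsv1.RollingUpdateDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "app",
					Image: "docker.io/library/httpd:2.4.38-alpine",
				}}},
			},
		},
	}
}

// rollBack restores a previously captured pod template, which is what a
// rollout undo amounts to for a DaemonSet.
func rollBack(cs kubernetes.Interface, ns string, old corev1.PodTemplateSpec) error {
	ds, err := cs.AppsV1().DaemonSets(ns).Get(context.TODO(), "daemon-set", metav1.GetOptions{})
	if err != nil {
		return err
	}
	ds.Spec.Template = old
	_, err = cs.AppsV1().DaemonSets(ns).Update(context.TODO(), ds, metav1.UpdateOptions{})
	return err
}

func main() {
	fmt.Println(newDaemonSet().Spec.UpdateStrategy.Type)
}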
[k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 30 21:25:46.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Mar 30 21:25:56.288: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4994 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 30 21:25:56.288: INFO: >>> kubeConfig: /root/.kube/config
Mar 30 21:25:56.416: INFO: Exec stderr: ""
Mar 30 21:25:56.416: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4994 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 30 21:25:56.416: INFO: >>> kubeConfig: /root/.kube/config
Mar 30 21:25:56.506: INFO: Exec stderr: ""
Mar 30 21:25:56.506: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4994 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 30 21:25:56.506: INFO: >>> kubeConfig: /root/.kube/config
Mar 30 21:25:56.594: INFO: Exec stderr: ""
Mar 30 21:25:56.594: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4994 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 30 21:25:56.594: INFO: >>> kubeConfig: /root/.kube/config
Mar 30 21:25:56.744: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Mar 30 21:25:56.745: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4994 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 30 21:25:56.745: INFO: >>> kubeConfig: /root/.kube/config
Mar 30 21:25:56.856: INFO: Exec stderr: ""
Mar 30 21:25:56.856: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4994 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 30 21:25:56.856: INFO: >>> kubeConfig: /root/.kube/config
Mar 30 21:25:56.951: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Mar 30 21:25:56.951: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4994 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 30 21:25:56.951: INFO: >>> kubeConfig: /root/.kube/config
Mar 30 21:25:57.071: INFO: Exec stderr: ""
Mar 30 21:25:57.071: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4994 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 30 21:25:57.071: INFO: >>> kubeConfig: /root/.kube/config
Mar 30 21:25:57.171: INFO: Exec stderr: ""
Mar 30 21:25:57.171: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4994 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 30 21:25:57.171: INFO: >>> kubeConfig: /root/.kube/config
Mar 30 21:25:57.274: INFO: Exec stderr: ""
Mar 30 21:25:57.274: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4994 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 30 21:25:57.274: INFO: >>> kubeConfig: /root/.kube/config
Mar 30 21:25:57.399: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 30 21:25:57.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-4994" for this suite.
• [SLOW TEST:11.295 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":68,"skipped":1316,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
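Each ExecWithOptions record above is a call to the pod's exec subresource streamed over SPDY. A compact sketch of the same call with client-go; the namespace, pod, container, and kubeconfig values are copied from the log, and error handling is reduced to panics:

// Sketch only; not the e2e framework's ExecWithOptions implementation.
package main

import (
	"bytes"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Build the exec subresource request for a specific container.
	req := cs.CoreV1().RESTClient().Post().
		Resource("pods").
		Namespace("e2e-kubelet-etc-hosts-4994").
		Name("test-pod").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "busybox-1",
			Command:   []string{"cat", "/etc/hosts"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
	if err != nil {
		panic(err)
	}
	var stdout, stderr bytes.Buffer
	// Stream runs the remote command and copies its output into the buffers.
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
		panic(err)
	}
	fmt.Print(stdout.String())
}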
[k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 30 21:25:57.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod test-webserver-9c5cde8c-00ca-47b9-9aad-0f195de6701c in namespace container-probe-5622
Mar 30 21:26:01.508: INFO: Started pod test-webserver-9c5cde8c-00ca-47b9-9aad-0f195de6701c in namespace container-probe-5622
STEP: checking the pod's current state and verifying that restartCount is present
Mar 30 21:26:01.511: INFO: Initial restart count of pod test-webserver-9c5cde8c-00ca-47b9-9aad-0f195de6701c is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 30 21:30:02.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5622" for this suite.
• [SLOW TEST:244.772 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":69,"skipped":1344,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
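The probe test's pod carries an HTTP liveness probe and simply has to keep restartCount at 0 across the four-minute observation window. A sketch of a pod with such a probe; the image, port, and exact probe parameters here are assumptions, not the suite's actual spec:

// Sketch only; in v1.17-era APIs the embedded handler struct was named
// Handler rather than ProbeHandler.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-webserver"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "test-webserver",
				Image: "nginx", // illustrative; the suite ships its own webserver image
				LivenessProbe: &corev1.Probe{
					ProbeHandler: corev1.ProbeHandler{
						HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(80)},
					},
					InitialDelaySeconds: 15,
					PeriodSeconds:       10,
					// Three consecutive failures would trigger a restart.
					FailureThreshold: 3,
				},
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].LivenessProbe.HTTPGet.Path)
}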
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Mar 30 21:30:02.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Mar 30 21:30:05.479: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Mar 30 21:30:05.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5985" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":70,"skipped":1370,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
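The termination-message check hinges on two container fields: a non-default TerminationMessagePath and a non-root SecurityContext. The kubelet copies that file's contents into the container's terminated state, which is the "DONE" the test read back. A sketch with assumed names, path, and image:

// Sketch only; the two Termination* fields and RunAsUser are the point.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	nonRoot := int64(1000)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo -n DONE > /dev/termination-custom-log"},
				// Non-default path; the kubelet reads this file into
				// Status.ContainerStatuses[].State.Terminated.Message.
				TerminationMessagePath:   "/dev/termination-custom-log",
				TerminationMessagePolicy: corev1.TerminationMessageReadFile,
				SecurityContext:          &corev1.SecurityContext{RunAsUser: &nonRoot},
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].TerminationMessagePath)
}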
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":70,"skipped":1370,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:30:05.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Mar 30 21:30:05.614: INFO: namespace kubectl-3466 Mar 30 21:30:05.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3466' Mar 30 21:30:05.949: INFO: stderr: "" Mar 30 21:30:05.949: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 30 21:30:06.953: INFO: Selector matched 1 pods for map[app:agnhost] Mar 30 21:30:06.953: INFO: Found 0 / 1 Mar 30 21:30:07.966: INFO: Selector matched 1 pods for map[app:agnhost] Mar 30 21:30:07.966: INFO: Found 0 / 1 Mar 30 21:30:08.954: INFO: Selector matched 1 pods for map[app:agnhost] Mar 30 21:30:08.954: INFO: Found 1 / 1 Mar 30 21:30:08.954: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 30 21:30:08.957: INFO: Selector matched 1 pods for map[app:agnhost] Mar 30 21:30:08.957: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 30 21:30:08.957: INFO: wait on agnhost-master startup in kubectl-3466 Mar 30 21:30:08.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-kzwnb agnhost-master --namespace=kubectl-3466' Mar 30 21:30:09.097: INFO: stderr: "" Mar 30 21:30:09.097: INFO: stdout: "Paused\n" STEP: exposing RC Mar 30 21:30:09.097: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-3466' Mar 30 21:30:09.249: INFO: stderr: "" Mar 30 21:30:09.249: INFO: stdout: "service/rm2 exposed\n" Mar 30 21:30:09.264: INFO: Service rm2 in namespace kubectl-3466 found. STEP: exposing service Mar 30 21:30:11.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-3466' Mar 30 21:30:11.401: INFO: stderr: "" Mar 30 21:30:11.401: INFO: stdout: "service/rm3 exposed\n" Mar 30 21:30:11.456: INFO: Service rm3 in namespace kubectl-3466 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:30:13.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3466" for this suite. 
• [SLOW TEST:7.938 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1295 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":71,"skipped":1386,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:30:13.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Mar 30 21:30:13.530: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:30:20.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7349" for this suite. • [SLOW TEST:7.071 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":72,"skipped":1403,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:30:20.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:30:36.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4512" for this suite. • [SLOW TEST:16.318 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":73,"skipped":1457,"failed":0} [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:30:36.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-a6c999ff-9c51-47bb-a272-939aee1883fa STEP: Creating a pod to test consume configMaps Mar 30 21:30:36.936: INFO: Waiting up to 5m0s for pod "pod-configmaps-59c79cdb-8fa5-4557-a80e-87582afe4604" in namespace "configmap-6718" to be "success or failure" Mar 30 21:30:36.990: INFO: Pod "pod-configmaps-59c79cdb-8fa5-4557-a80e-87582afe4604": Phase="Pending", Reason="", readiness=false. Elapsed: 53.356685ms Mar 30 21:30:38.993: INFO: Pod "pod-configmaps-59c79cdb-8fa5-4557-a80e-87582afe4604": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057186577s Mar 30 21:30:41.003: INFO: Pod "pod-configmaps-59c79cdb-8fa5-4557-a80e-87582afe4604": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.066621772s STEP: Saw pod success Mar 30 21:30:41.003: INFO: Pod "pod-configmaps-59c79cdb-8fa5-4557-a80e-87582afe4604" satisfied condition "success or failure" Mar 30 21:30:41.006: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-59c79cdb-8fa5-4557-a80e-87582afe4604 container configmap-volume-test: STEP: delete the pod Mar 30 21:30:41.031: INFO: Waiting for pod pod-configmaps-59c79cdb-8fa5-4557-a80e-87582afe4604 to disappear Mar 30 21:30:41.057: INFO: Pod pod-configmaps-59c79cdb-8fa5-4557-a80e-87582afe4604 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:30:41.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6718" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":74,"skipped":1457,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:30:41.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 30 21:30:41.152: INFO: (0) /api/v1/nodes/jerma-worker:10250/proxy/logs/:
containers/ pods/ (200; 18.781268ms)
Mar 30 21:30:41.155: INFO: (1) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.773646ms)
Mar 30 21:30:41.159: INFO: (2) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.643114ms)
Mar 30 21:30:41.162: INFO: (3) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.094782ms)
Mar 30 21:30:41.165: INFO: (4) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.177238ms)
Mar 30 21:30:41.169: INFO: (5) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.614039ms)
Mar 30 21:30:41.172: INFO: (6) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.313188ms)
Mar 30 21:30:41.176: INFO: (7) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.354594ms)
Mar 30 21:30:41.180: INFO: (8) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 4.018133ms)
Mar 30 21:30:41.183: INFO: (9) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.48644ms)
Mar 30 21:30:41.188: INFO: (10) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 4.108584ms)
Mar 30 21:30:41.191: INFO: (11) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.221379ms)
Mar 30 21:30:41.212: INFO: (12) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 20.769617ms)
Mar 30 21:30:41.215: INFO: (13) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.770957ms)
Mar 30 21:30:41.219: INFO: (14) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.933344ms)
Mar 30 21:30:41.223: INFO: (15) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.378331ms)
Mar 30 21:30:41.226: INFO: (16) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.882841ms)
Mar 30 21:30:41.229: INFO: (17) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 3.192125ms)
Mar 30 21:30:41.232: INFO: (18) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/ (200; 2.976248ms)
Mar 30 21:30:41.235: INFO: (19) /api/v1/nodes/jerma-worker:10250/proxy/logs/: containers/ pods/
(200; 3.152358ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:30:41.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-6296" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":278,"completed":75,"skipped":1475,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:30:41.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 30 21:30:41.271: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 30 21:30:44.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3402 create -f -' Mar 30 21:30:47.150: INFO: stderr: "" Mar 30 21:30:47.150: INFO: stdout: "e2e-test-crd-publish-openapi-688-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Mar 30 21:30:47.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3402 delete e2e-test-crd-publish-openapi-688-crds test-cr' Mar 30 21:30:47.260: INFO: stderr: "" Mar 30 21:30:47.260: INFO: stdout: "e2e-test-crd-publish-openapi-688-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Mar 30 21:30:47.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3402 apply -f -' Mar 30 21:30:47.504: INFO: stderr: "" Mar 30 21:30:47.504: INFO: stdout: "e2e-test-crd-publish-openapi-688-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Mar 30 21:30:47.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3402 delete e2e-test-crd-publish-openapi-688-crds test-cr' Mar 30 21:30:47.593: INFO: stderr: "" Mar 30 21:30:47.593: INFO: stdout: "e2e-test-crd-publish-openapi-688-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Mar 30 21:30:47.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-688-crds' Mar 30 21:30:47.858: INFO: stderr: "" Mar 30 21:30:47.858: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-688-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. 
Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:30:50.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3402" for this suite. • [SLOW TEST:9.480 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":76,"skipped":1477,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:30:50.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating cluster-info Mar 30 21:30:50.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Mar 30 21:30:50.878: INFO: stderr: "" Mar 30 21:30:50.878: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:30:50.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4700" for this suite. 
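The cluster-info check only asserts that the master and KubeDNS entries show up in the command output; the \x1b sequences captured in stdout above are ANSI colour codes, not corruption. To reproduce by hand, assuming a reachable kubeconfig:

kubectl cluster-info
# a fuller, file-based snapshot of cluster state:
kubectl cluster-info dump --namespaces kube-system --output-directory=/tmp/cluster-state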
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":77,"skipped":1496,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:30:50.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-3145 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-3145 STEP: creating replication controller externalsvc in namespace services-3145 I0330 21:30:51.098046 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-3145, replica count: 2 I0330 21:30:54.148430 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0330 21:30:57.148671 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Mar 30 21:30:57.193: INFO: Creating new exec pod Mar 30 21:31:01.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3145 execpodrb5pd -- /bin/sh -x -c nslookup clusterip-service' Mar 30 21:31:01.503: INFO: stderr: "I0330 21:31:01.398424 799 log.go:172] (0xc00067f080) (0xc0006bfc20) Create stream\nI0330 21:31:01.398485 799 log.go:172] (0xc00067f080) (0xc0006bfc20) Stream added, broadcasting: 1\nI0330 21:31:01.401242 799 log.go:172] (0xc00067f080) Reply frame received for 1\nI0330 21:31:01.401302 799 log.go:172] (0xc00067f080) (0xc000a6e000) Create stream\nI0330 21:31:01.401315 799 log.go:172] (0xc00067f080) (0xc000a6e000) Stream added, broadcasting: 3\nI0330 21:31:01.402495 799 log.go:172] (0xc00067f080) Reply frame received for 3\nI0330 21:31:01.402528 799 log.go:172] (0xc00067f080) (0xc0006bfd60) Create stream\nI0330 21:31:01.402539 799 log.go:172] (0xc00067f080) (0xc0006bfd60) Stream added, broadcasting: 5\nI0330 21:31:01.403596 799 log.go:172] (0xc00067f080) Reply frame received for 5\nI0330 21:31:01.490701 799 log.go:172] (0xc00067f080) Data frame received for 5\nI0330 21:31:01.490733 799 log.go:172] (0xc0006bfd60) (5) Data frame handling\nI0330 21:31:01.490751 799 log.go:172] (0xc0006bfd60) (5) Data frame sent\n+ nslookup clusterip-service\nI0330 21:31:01.495312 799 log.go:172] (0xc00067f080) Data frame received for 3\nI0330 21:31:01.495334 799 log.go:172] (0xc000a6e000) (3) Data frame handling\nI0330 21:31:01.495353 799 log.go:172] (0xc000a6e000) (3) Data frame 
sent\nI0330 21:31:01.496370 799 log.go:172] (0xc00067f080) Data frame received for 3\nI0330 21:31:01.496382 799 log.go:172] (0xc000a6e000) (3) Data frame handling\nI0330 21:31:01.496388 799 log.go:172] (0xc000a6e000) (3) Data frame sent\nI0330 21:31:01.497010 799 log.go:172] (0xc00067f080) Data frame received for 3\nI0330 21:31:01.497026 799 log.go:172] (0xc000a6e000) (3) Data frame handling\nI0330 21:31:01.497565 799 log.go:172] (0xc00067f080) Data frame received for 5\nI0330 21:31:01.497610 799 log.go:172] (0xc0006bfd60) (5) Data frame handling\nI0330 21:31:01.499118 799 log.go:172] (0xc00067f080) Data frame received for 1\nI0330 21:31:01.499202 799 log.go:172] (0xc0006bfc20) (1) Data frame handling\nI0330 21:31:01.499246 799 log.go:172] (0xc0006bfc20) (1) Data frame sent\nI0330 21:31:01.499291 799 log.go:172] (0xc00067f080) (0xc0006bfc20) Stream removed, broadcasting: 1\nI0330 21:31:01.499348 799 log.go:172] (0xc00067f080) Go away received\nI0330 21:31:01.499675 799 log.go:172] (0xc00067f080) (0xc0006bfc20) Stream removed, broadcasting: 1\nI0330 21:31:01.499693 799 log.go:172] (0xc00067f080) (0xc000a6e000) Stream removed, broadcasting: 3\nI0330 21:31:01.499705 799 log.go:172] (0xc00067f080) (0xc0006bfd60) Stream removed, broadcasting: 5\n" Mar 30 21:31:01.503: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-3145.svc.cluster.local\tcanonical name = externalsvc.services-3145.svc.cluster.local.\nName:\texternalsvc.services-3145.svc.cluster.local\nAddress: 10.102.86.11\n\n" STEP: deleting ReplicationController externalsvc in namespace services-3145, will wait for the garbage collector to delete the pods Mar 30 21:31:01.567: INFO: Deleting ReplicationController externalsvc took: 6.286688ms Mar 30 21:31:01.867: INFO: Terminating ReplicationController externalsvc pods took: 300.26503ms Mar 30 21:31:09.593: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:31:09.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3145" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:18.738 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":78,"skipped":1516,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:31:09.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:31:40.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-5700" for this suite. STEP: Destroying namespace "nsdeletetest-790" for this suite. Mar 30 21:31:40.852: INFO: Namespace nsdeletetest-790 was already deleted STEP: Destroying namespace "nsdeletetest-1495" for this suite. 
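The namespace lifecycle the spec above walks through (create, populate, delete, verify emptiness) can be sketched with plain kubectl; all names here are illustrative:

kubectl create namespace nsdelete-demo
kubectl run sleeper --image=busybox --restart=Never -n nsdelete-demo -- sleep 3600
kubectl wait --for=condition=Ready pod/sleeper -n nsdelete-demo
kubectl delete namespace nsdelete-demo
# deletion is asynchronous: the namespace reports Terminating until every pod
# in it is gone, after which both lookups return NotFound
kubectl get namespace nsdelete-demo
kubectl get pod sleeper -n nsdelete-demo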
• [SLOW TEST:31.231 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":79,"skipped":1537,"failed":0} S ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:31:40.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 30 21:31:40.918: INFO: Creating deployment "test-recreate-deployment" Mar 30 21:31:40.930: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Mar 30 21:31:40.944: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Mar 30 21:31:42.950: INFO: Waiting deployment "test-recreate-deployment" to complete Mar 30 21:31:42.952: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721200700, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721200700, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721200701, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721200700, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 30 21:31:44.956: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Mar 30 21:31:44.964: INFO: Updating deployment test-recreate-deployment Mar 30 21:31:44.964: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 30 21:31:45.430: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-3227 /apis/apps/v1/namespaces/deployment-3227/deployments/test-recreate-deployment ddbc90d2-7c0b-49d0-ac4b-c7c50a5aa9d3 4054763 2 2020-03-30 21:31:40 +0000 UTC map[name:sample-pod-3]
map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004811458 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-03-30 21:31:45 +0000 UTC,LastTransitionTime:2020-03-30 21:31:45 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-03-30 21:31:45 +0000 UTC,LastTransitionTime:2020-03-30 21:31:40 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Mar 30 21:31:45.443: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-3227 /apis/apps/v1/namespaces/deployment-3227/replicasets/test-recreate-deployment-5f94c574ff be5897d6-bbf0-4fab-a23c-fc7ab37c7017 4054761 1 2020-03-30 21:31:45 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment ddbc90d2-7c0b-49d0-ac4b-c7c50a5aa9d3 0xc004811987 0xc004811988}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004811a68 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 30 21:31:45.443: INFO: All old 
ReplicaSets of Deployment "test-recreate-deployment": Mar 30 21:31:45.443: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-3227 /apis/apps/v1/namespaces/deployment-3227/replicasets/test-recreate-deployment-799c574856 a86b1809-45b4-407b-b635-7a5863c9b69f 4054752 2 2020-03-30 21:31:40 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment ddbc90d2-7c0b-49d0-ac4b-c7c50a5aa9d3 0xc004811ae7 0xc004811ae8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004811b88 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 30 21:31:45.446: INFO: Pod "test-recreate-deployment-5f94c574ff-mwdk8" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-mwdk8 test-recreate-deployment-5f94c574ff- deployment-3227 /api/v1/namespaces/deployment-3227/pods/test-recreate-deployment-5f94c574ff-mwdk8 b8b5a439-f121-41cd-a77a-e6381b5bfebd 4054764 0 2020-03-30 21:31:45 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff be5897d6-bbf0-4fab-a23c-fc7ab37c7017 0xc00477a0c7 0xc00477a0c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rznd8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rznd8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rznd8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:31:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:31:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:31:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:31:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-03-30 21:31:45 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:31:45.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3227" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":80,"skipped":1538,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:31:45.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 pods, got 2 pods STEP: expected 0 rs, got 1 rs STEP: Gathering metrics W0330 21:31:46.570939 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 30 21:31:46.571: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:31:46.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3538" for this suite. 
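The garbage-collector spec above relies on cascading deletion through ownerReferences: once the Deployment is deleted without orphaning, the collector removes the dependent ReplicaSet and its Pods. A sketch with an illustrative name (on the kubectl vintage logged here, --cascade=true is the default and --cascade=false would orphan the dependents):

kubectl create deployment gc-demo --image=docker.io/library/httpd:2.4.38-alpine
kubectl get rs -l app=gc-demo        # one ReplicaSet, owned by the Deployment
kubectl delete deployment gc-demo
kubectl get rs,pods -l app=gc-demo   # eventually empty once the collector catches up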
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":81,"skipped":1545,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:31:46.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 30 21:31:47.076: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:31:51.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6880" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":82,"skipped":1569,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:31:51.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 30 21:31:51.327: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:31:52.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2835" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":278,"completed":83,"skipped":1609,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:31:52.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 30 21:31:52.628: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Mar 30 21:31:52.676: INFO: Pod name sample-pod: Found 0 pods out of 1 Mar 30 21:31:57.680: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 30 21:31:57.680: INFO: Creating deployment "test-rolling-update-deployment" Mar 30 21:31:57.683: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Mar 30 21:31:57.691: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Mar 30 21:31:59.706: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Mar 30 21:31:59.708: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721200717, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721200717, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721200717, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721200717, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 30 21:32:01.711: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 30 21:32:01.719: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-7593 /apis/apps/v1/namespaces/deployment-7593/deployments/test-rolling-update-deployment ace9e579-43b6-401a-8658-4f86f0332da2 4054968 1 2020-03-30 21:31:57 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] 
[]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0045783f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-30 21:31:57 +0000 UTC,LastTransitionTime:2020-03-30 21:31:57 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-03-30 21:32:00 +0000 UTC,LastTransitionTime:2020-03-30 21:31:57 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 30 21:32:01.721: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-7593 /apis/apps/v1/namespaces/deployment-7593/replicasets/test-rolling-update-deployment-67cf4f6444 7d065f20-6702-4112-9d66-2eef41721235 4054955 1 2020-03-30 21:31:57 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment ace9e579-43b6-401a-8658-4f86f0332da2 0xc0045789b7 0xc0045789b8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004578a48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[]
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 30 21:32:01.721: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Mar 30 21:32:01.722: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-7593 /apis/apps/v1/namespaces/deployment-7593/replicasets/test-rolling-update-controller 1d442238-4a52-4fa1-9e79-4964b7156de0 4054966 2 2020-03-30 21:31:52 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment ace9e579-43b6-401a-8658-4f86f0332da2 0xc00457889f 0xc0045788b0}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004578928 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 30 21:32:01.725: INFO: Pod "test-rolling-update-deployment-67cf4f6444-25tcl" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-25tcl test-rolling-update-deployment-67cf4f6444- deployment-7593 /api/v1/namespaces/deployment-7593/pods/test-rolling-update-deployment-67cf4f6444-25tcl 95a5eece-57d6-4b74-bf21-9f4170fa80d4 4054954 0 2020-03-30 21:31:57 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 7d065f20-6702-4112-9d66-2eef41721235 0xc0045dceb7 0xc0045dceb8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-f86gh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-f86gh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-f86gh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:31:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:32:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:32:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:31:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.184,StartTime:2020-03-30 21:31:57 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-30 21:31:59 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://2d26f3e3ca5f18b9735e7596666bcc5663aaa91d0d98c2f1a48b0028e7c9145f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.184,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:32:01.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7593" for this suite. • [SLOW TEST:9.171 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":84,"skipped":1659,"failed":0} S ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:32:01.732: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Mar 30 21:32:01.808: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4585 /api/v1/namespaces/watch-4585/configmaps/e2e-watch-test-watch-closed 9b335754-7e10-40c3-89a9-f51c60ae05fd 4054978 0 2020-03-30 21:32:01 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 30 21:32:01.808: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4585 /api/v1/namespaces/watch-4585/configmaps/e2e-watch-test-watch-closed 9b335754-7e10-40c3-89a9-f51c60ae05fd 4054979 0 2020-03-30 21:32:01 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe 
notifications for all changes to the configmap since the first watch closed Mar 30 21:32:01.820: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4585 /api/v1/namespaces/watch-4585/configmaps/e2e-watch-test-watch-closed 9b335754-7e10-40c3-89a9-f51c60ae05fd 4054980 0 2020-03-30 21:32:01 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 30 21:32:01.820: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4585 /api/v1/namespaces/watch-4585/configmaps/e2e-watch-test-watch-closed 9b335754-7e10-40c3-89a9-f51c60ae05fd 4054981 0 2020-03-30 21:32:01 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:32:01.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4585" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":85,"skipped":1660,"failed":0} SSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:32:01.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-6710 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-6710 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6710 Mar 30 21:32:01.922: INFO: Found 0 stateful pods, waiting for 1 Mar 30 21:32:11.927: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Mar 30 21:32:11.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6710 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 30 21:32:12.203: INFO: stderr: "I0330 21:32:12.063457 820 log.go:172] (0xc000ad2a50) (0xc0006a3b80) Create stream\nI0330 21:32:12.063506 820 log.go:172] (0xc000ad2a50) (0xc0006a3b80) Stream added, broadcasting: 1\nI0330 21:32:12.065892 820 log.go:172] (0xc000ad2a50) Reply frame 
received for 1\nI0330 21:32:12.065916 820 log.go:172] (0xc000ad2a50) (0xc000992000) Create stream\nI0330 21:32:12.065924 820 log.go:172] (0xc000ad2a50) (0xc000992000) Stream added, broadcasting: 3\nI0330 21:32:12.066961 820 log.go:172] (0xc000ad2a50) Reply frame received for 3\nI0330 21:32:12.067029 820 log.go:172] (0xc000ad2a50) (0xc000994000) Create stream\nI0330 21:32:12.067054 820 log.go:172] (0xc000ad2a50) (0xc000994000) Stream added, broadcasting: 5\nI0330 21:32:12.068094 820 log.go:172] (0xc000ad2a50) Reply frame received for 5\nI0330 21:32:12.164410 820 log.go:172] (0xc000ad2a50) Data frame received for 5\nI0330 21:32:12.164435 820 log.go:172] (0xc000994000) (5) Data frame handling\nI0330 21:32:12.164451 820 log.go:172] (0xc000994000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0330 21:32:12.194979 820 log.go:172] (0xc000ad2a50) Data frame received for 3\nI0330 21:32:12.195006 820 log.go:172] (0xc000992000) (3) Data frame handling\nI0330 21:32:12.195037 820 log.go:172] (0xc000992000) (3) Data frame sent\nI0330 21:32:12.195454 820 log.go:172] (0xc000ad2a50) Data frame received for 3\nI0330 21:32:12.195498 820 log.go:172] (0xc000992000) (3) Data frame handling\nI0330 21:32:12.195852 820 log.go:172] (0xc000ad2a50) Data frame received for 5\nI0330 21:32:12.195868 820 log.go:172] (0xc000994000) (5) Data frame handling\nI0330 21:32:12.197514 820 log.go:172] (0xc000ad2a50) Data frame received for 1\nI0330 21:32:12.197540 820 log.go:172] (0xc0006a3b80) (1) Data frame handling\nI0330 21:32:12.197553 820 log.go:172] (0xc0006a3b80) (1) Data frame sent\nI0330 21:32:12.197701 820 log.go:172] (0xc000ad2a50) (0xc0006a3b80) Stream removed, broadcasting: 1\nI0330 21:32:12.198021 820 log.go:172] (0xc000ad2a50) Go away received\nI0330 21:32:12.198266 820 log.go:172] (0xc000ad2a50) (0xc0006a3b80) Stream removed, broadcasting: 1\nI0330 21:32:12.198304 820 log.go:172] (0xc000ad2a50) (0xc000992000) Stream removed, broadcasting: 3\nI0330 21:32:12.198322 820 log.go:172] (0xc000ad2a50) (0xc000994000) Stream removed, broadcasting: 5\n" Mar 30 21:32:12.203: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 30 21:32:12.203: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 30 21:32:12.242: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 30 21:32:22.247: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 30 21:32:22.247: INFO: Waiting for statefulset status.replicas updated to 0 Mar 30 21:32:22.263: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999652s Mar 30 21:32:23.268: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.993789947s Mar 30 21:32:24.272: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.989544611s Mar 30 21:32:25.276: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.985264048s Mar 30 21:32:26.280: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.981182314s Mar 30 21:32:27.285: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.976799172s Mar 30 21:32:28.289: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.972143088s Mar 30 21:32:29.294: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.967890553s Mar 30 21:32:30.299: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.963302438s Mar 30 21:32:31.303: 
INFO: Verifying statefulset ss doesn't scale past 1 for another 958.760788ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6710 Mar 30 21:32:32.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6710 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 30 21:32:32.553: INFO: stderr: "I0330 21:32:32.447775 842 log.go:172] (0xc00053a6e0) (0xc0006cbb80) Create stream\nI0330 21:32:32.447827 842 log.go:172] (0xc00053a6e0) (0xc0006cbb80) Stream added, broadcasting: 1\nI0330 21:32:32.450249 842 log.go:172] (0xc00053a6e0) Reply frame received for 1\nI0330 21:32:32.450298 842 log.go:172] (0xc00053a6e0) (0xc0006cbe00) Create stream\nI0330 21:32:32.450316 842 log.go:172] (0xc00053a6e0) (0xc0006cbe00) Stream added, broadcasting: 3\nI0330 21:32:32.451229 842 log.go:172] (0xc00053a6e0) Reply frame received for 3\nI0330 21:32:32.451258 842 log.go:172] (0xc00053a6e0) (0xc0009ac000) Create stream\nI0330 21:32:32.451265 842 log.go:172] (0xc00053a6e0) (0xc0009ac000) Stream added, broadcasting: 5\nI0330 21:32:32.452125 842 log.go:172] (0xc00053a6e0) Reply frame received for 5\nI0330 21:32:32.541751 842 log.go:172] (0xc00053a6e0) Data frame received for 3\nI0330 21:32:32.541788 842 log.go:172] (0xc0006cbe00) (3) Data frame handling\nI0330 21:32:32.541818 842 log.go:172] (0xc0006cbe00) (3) Data frame sent\nI0330 21:32:32.542107 842 log.go:172] (0xc00053a6e0) Data frame received for 5\nI0330 21:32:32.542139 842 log.go:172] (0xc0009ac000) (5) Data frame handling\nI0330 21:32:32.542165 842 log.go:172] (0xc0009ac000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0330 21:32:32.547847 842 log.go:172] (0xc00053a6e0) Data frame received for 3\nI0330 21:32:32.547919 842 log.go:172] (0xc0006cbe00) (3) Data frame handling\nI0330 21:32:32.547950 842 log.go:172] (0xc00053a6e0) Data frame received for 5\nI0330 21:32:32.547960 842 log.go:172] (0xc0009ac000) (5) Data frame handling\nI0330 21:32:32.550116 842 log.go:172] (0xc00053a6e0) Data frame received for 1\nI0330 21:32:32.550133 842 log.go:172] (0xc0006cbb80) (1) Data frame handling\nI0330 21:32:32.550141 842 log.go:172] (0xc0006cbb80) (1) Data frame sent\nI0330 21:32:32.550151 842 log.go:172] (0xc00053a6e0) (0xc0006cbb80) Stream removed, broadcasting: 1\nI0330 21:32:32.550187 842 log.go:172] (0xc00053a6e0) Go away received\nI0330 21:32:32.550424 842 log.go:172] (0xc00053a6e0) (0xc0006cbb80) Stream removed, broadcasting: 1\nI0330 21:32:32.550437 842 log.go:172] (0xc00053a6e0) (0xc0006cbe00) Stream removed, broadcasting: 3\nI0330 21:32:32.550443 842 log.go:172] (0xc00053a6e0) (0xc0009ac000) Stream removed, broadcasting: 5\n" Mar 30 21:32:32.553: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 30 21:32:32.553: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 30 21:32:32.556: INFO: Found 1 stateful pods, waiting for 3 Mar 30 21:32:42.561: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 30 21:32:42.561: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 30 21:32:42.561: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Mar 30 
21:32:42.567: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6710 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 30 21:32:42.779: INFO: stderr: "I0330 21:32:42.694215 865 log.go:172] (0xc0006b60b0) (0xc0008020a0) Create stream\nI0330 21:32:42.694312 865 log.go:172] (0xc0006b60b0) (0xc0008020a0) Stream added, broadcasting: 1\nI0330 21:32:42.696294 865 log.go:172] (0xc0006b60b0) Reply frame received for 1\nI0330 21:32:42.696341 865 log.go:172] (0xc0006b60b0) (0xc0007f0000) Create stream\nI0330 21:32:42.696357 865 log.go:172] (0xc0006b60b0) (0xc0007f0000) Stream added, broadcasting: 3\nI0330 21:32:42.697595 865 log.go:172] (0xc0006b60b0) Reply frame received for 3\nI0330 21:32:42.697659 865 log.go:172] (0xc0006b60b0) (0xc000802140) Create stream\nI0330 21:32:42.697684 865 log.go:172] (0xc0006b60b0) (0xc000802140) Stream added, broadcasting: 5\nI0330 21:32:42.698533 865 log.go:172] (0xc0006b60b0) Reply frame received for 5\nI0330 21:32:42.773257 865 log.go:172] (0xc0006b60b0) Data frame received for 5\nI0330 21:32:42.773298 865 log.go:172] (0xc000802140) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0330 21:32:42.773329 865 log.go:172] (0xc0006b60b0) Data frame received for 3\nI0330 21:32:42.773382 865 log.go:172] (0xc0007f0000) (3) Data frame handling\nI0330 21:32:42.773410 865 log.go:172] (0xc0007f0000) (3) Data frame sent\nI0330 21:32:42.773432 865 log.go:172] (0xc0006b60b0) Data frame received for 3\nI0330 21:32:42.773449 865 log.go:172] (0xc000802140) (5) Data frame sent\nI0330 21:32:42.773472 865 log.go:172] (0xc0006b60b0) Data frame received for 5\nI0330 21:32:42.773488 865 log.go:172] (0xc0007f0000) (3) Data frame handling\nI0330 21:32:42.773547 865 log.go:172] (0xc000802140) (5) Data frame handling\nI0330 21:32:42.775006 865 log.go:172] (0xc0006b60b0) Data frame received for 1\nI0330 21:32:42.775043 865 log.go:172] (0xc0008020a0) (1) Data frame handling\nI0330 21:32:42.775064 865 log.go:172] (0xc0008020a0) (1) Data frame sent\nI0330 21:32:42.775089 865 log.go:172] (0xc0006b60b0) (0xc0008020a0) Stream removed, broadcasting: 1\nI0330 21:32:42.775115 865 log.go:172] (0xc0006b60b0) Go away received\nI0330 21:32:42.775391 865 log.go:172] (0xc0006b60b0) (0xc0008020a0) Stream removed, broadcasting: 1\nI0330 21:32:42.775409 865 log.go:172] (0xc0006b60b0) (0xc0007f0000) Stream removed, broadcasting: 3\nI0330 21:32:42.775420 865 log.go:172] (0xc0006b60b0) (0xc000802140) Stream removed, broadcasting: 5\n" Mar 30 21:32:42.779: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 30 21:32:42.779: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 30 21:32:42.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6710 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 30 21:32:43.008: INFO: stderr: "I0330 21:32:42.914548 885 log.go:172] (0xc000bb9080) (0xc000afa640) Create stream\nI0330 21:32:42.914602 885 log.go:172] (0xc000bb9080) (0xc000afa640) Stream added, broadcasting: 1\nI0330 21:32:42.916677 885 log.go:172] (0xc000bb9080) Reply frame received for 1\nI0330 21:32:42.916743 885 log.go:172] (0xc000bb9080) (0xc000ad60a0) Create stream\nI0330 21:32:42.916757 885 log.go:172] (0xc000bb9080) (0xc000ad60a0) Stream added, broadcasting: 3\nI0330 21:32:42.918041 885 log.go:172] 
(0xc000bb9080) Reply frame received for 3\nI0330 21:32:42.918083 885 log.go:172] (0xc000bb9080) (0xc000a0c500) Create stream\nI0330 21:32:42.918104 885 log.go:172] (0xc000bb9080) (0xc000a0c500) Stream added, broadcasting: 5\nI0330 21:32:42.919059 885 log.go:172] (0xc000bb9080) Reply frame received for 5\nI0330 21:32:42.968692 885 log.go:172] (0xc000bb9080) Data frame received for 5\nI0330 21:32:42.968728 885 log.go:172] (0xc000a0c500) (5) Data frame handling\nI0330 21:32:42.968756 885 log.go:172] (0xc000a0c500) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0330 21:32:42.999371 885 log.go:172] (0xc000bb9080) Data frame received for 3\nI0330 21:32:42.999419 885 log.go:172] (0xc000ad60a0) (3) Data frame handling\nI0330 21:32:42.999453 885 log.go:172] (0xc000ad60a0) (3) Data frame sent\nI0330 21:32:42.999634 885 log.go:172] (0xc000bb9080) Data frame received for 3\nI0330 21:32:42.999674 885 log.go:172] (0xc000ad60a0) (3) Data frame handling\nI0330 21:32:42.999692 885 log.go:172] (0xc000bb9080) Data frame received for 5\nI0330 21:32:42.999732 885 log.go:172] (0xc000a0c500) (5) Data frame handling\nI0330 21:32:43.002076 885 log.go:172] (0xc000bb9080) Data frame received for 1\nI0330 21:32:43.002090 885 log.go:172] (0xc000afa640) (1) Data frame handling\nI0330 21:32:43.002096 885 log.go:172] (0xc000afa640) (1) Data frame sent\nI0330 21:32:43.002104 885 log.go:172] (0xc000bb9080) (0xc000afa640) Stream removed, broadcasting: 1\nI0330 21:32:43.002123 885 log.go:172] (0xc000bb9080) Go away received\nI0330 21:32:43.003678 885 log.go:172] (0xc000bb9080) (0xc000afa640) Stream removed, broadcasting: 1\nI0330 21:32:43.003731 885 log.go:172] (0xc000bb9080) (0xc000ad60a0) Stream removed, broadcasting: 3\nI0330 21:32:43.003778 885 log.go:172] (0xc000bb9080) (0xc000a0c500) Stream removed, broadcasting: 5\n" Mar 30 21:32:43.008: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 30 21:32:43.008: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 30 21:32:43.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6710 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 30 21:32:43.274: INFO: stderr: "I0330 21:32:43.148265 908 log.go:172] (0xc0003d9130) (0xc0008f20a0) Create stream\nI0330 21:32:43.148320 908 log.go:172] (0xc0003d9130) (0xc0008f20a0) Stream added, broadcasting: 1\nI0330 21:32:43.150389 908 log.go:172] (0xc0003d9130) Reply frame received for 1\nI0330 21:32:43.150425 908 log.go:172] (0xc0003d9130) (0xc00065db80) Create stream\nI0330 21:32:43.150438 908 log.go:172] (0xc0003d9130) (0xc00065db80) Stream added, broadcasting: 3\nI0330 21:32:43.151169 908 log.go:172] (0xc0003d9130) Reply frame received for 3\nI0330 21:32:43.151198 908 log.go:172] (0xc0003d9130) (0xc0008f2140) Create stream\nI0330 21:32:43.151212 908 log.go:172] (0xc0003d9130) (0xc0008f2140) Stream added, broadcasting: 5\nI0330 21:32:43.152027 908 log.go:172] (0xc0003d9130) Reply frame received for 5\nI0330 21:32:43.220364 908 log.go:172] (0xc0003d9130) Data frame received for 5\nI0330 21:32:43.220390 908 log.go:172] (0xc0008f2140) (5) Data frame handling\nI0330 21:32:43.220408 908 log.go:172] (0xc0008f2140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0330 21:32:43.267146 908 log.go:172] (0xc0003d9130) Data frame received for 3\nI0330 21:32:43.267244 908 log.go:172] 
(0xc00065db80) (3) Data frame handling\nI0330 21:32:43.267297 908 log.go:172] (0xc00065db80) (3) Data frame sent\nI0330 21:32:43.267370 908 log.go:172] (0xc0003d9130) Data frame received for 3\nI0330 21:32:43.267392 908 log.go:172] (0xc00065db80) (3) Data frame handling\nI0330 21:32:43.267665 908 log.go:172] (0xc0003d9130) Data frame received for 5\nI0330 21:32:43.267676 908 log.go:172] (0xc0008f2140) (5) Data frame handling\nI0330 21:32:43.269023 908 log.go:172] (0xc0003d9130) Data frame received for 1\nI0330 21:32:43.269038 908 log.go:172] (0xc0008f20a0) (1) Data frame handling\nI0330 21:32:43.269044 908 log.go:172] (0xc0008f20a0) (1) Data frame sent\nI0330 21:32:43.269055 908 log.go:172] (0xc0003d9130) (0xc0008f20a0) Stream removed, broadcasting: 1\nI0330 21:32:43.269265 908 log.go:172] (0xc0003d9130) Go away received\nI0330 21:32:43.269481 908 log.go:172] (0xc0003d9130) (0xc0008f20a0) Stream removed, broadcasting: 1\nI0330 21:32:43.269500 908 log.go:172] (0xc0003d9130) (0xc00065db80) Stream removed, broadcasting: 3\nI0330 21:32:43.269508 908 log.go:172] (0xc0003d9130) (0xc0008f2140) Stream removed, broadcasting: 5\n" Mar 30 21:32:43.274: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 30 21:32:43.274: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 30 21:32:43.274: INFO: Waiting for statefulset status.replicas updated to 0 Mar 30 21:32:43.277: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Mar 30 21:32:53.289: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 30 21:32:53.289: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 30 21:32:53.289: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 30 21:32:53.301: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999733s Mar 30 21:32:54.306: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.993576469s Mar 30 21:32:55.310: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.988605776s Mar 30 21:32:56.315: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.984797343s Mar 30 21:32:57.319: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.979870491s Mar 30 21:32:58.323: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.976000856s Mar 30 21:32:59.331: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.971405232s Mar 30 21:33:00.336: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.963267482s Mar 30 21:33:01.342: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.958103405s Mar 30 21:33:02.347: INFO: Verifying statefulset ss doesn't scale past 3 for another 953.147452ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-6710 Mar 30 21:33:03.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6710 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 30 21:33:03.582: INFO: stderr: "I0330 21:33:03.479191 930 log.go:172] (0xc0000f5550) (0xc000685cc0) Create stream\nI0330 21:33:03.479242 930 log.go:172] (0xc0000f5550) (0xc000685cc0) Stream added, broadcasting: 1\nI0330 21:33:03.481315 930 log.go:172] (0xc0000f5550) Reply frame received for 1\nI0330 21:33:03.481372 930 log.go:172]
(0xc0000f5550) (0xc00002e000) Create stream\nI0330 21:33:03.481389 930 log.go:172] (0xc0000f5550) (0xc00002e000) Stream added, broadcasting: 3\nI0330 21:33:03.482103 930 log.go:172] (0xc0000f5550) Reply frame received for 3\nI0330 21:33:03.482137 930 log.go:172] (0xc0000f5550) (0xc000226000) Create stream\nI0330 21:33:03.482156 930 log.go:172] (0xc0000f5550) (0xc000226000) Stream added, broadcasting: 5\nI0330 21:33:03.482839 930 log.go:172] (0xc0000f5550) Reply frame received for 5\nI0330 21:33:03.576329 930 log.go:172] (0xc0000f5550) Data frame received for 3\nI0330 21:33:03.576352 930 log.go:172] (0xc00002e000) (3) Data frame handling\nI0330 21:33:03.576360 930 log.go:172] (0xc00002e000) (3) Data frame sent\nI0330 21:33:03.576366 930 log.go:172] (0xc0000f5550) Data frame received for 3\nI0330 21:33:03.576378 930 log.go:172] (0xc00002e000) (3) Data frame handling\nI0330 21:33:03.576404 930 log.go:172] (0xc0000f5550) Data frame received for 5\nI0330 21:33:03.576454 930 log.go:172] (0xc000226000) (5) Data frame handling\nI0330 21:33:03.576497 930 log.go:172] (0xc000226000) (5) Data frame sent\nI0330 21:33:03.576523 930 log.go:172] (0xc0000f5550) Data frame received for 5\nI0330 21:33:03.576545 930 log.go:172] (0xc000226000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0330 21:33:03.577949 930 log.go:172] (0xc0000f5550) Data frame received for 1\nI0330 21:33:03.577972 930 log.go:172] (0xc000685cc0) (1) Data frame handling\nI0330 21:33:03.577991 930 log.go:172] (0xc000685cc0) (1) Data frame sent\nI0330 21:33:03.578046 930 log.go:172] (0xc0000f5550) (0xc000685cc0) Stream removed, broadcasting: 1\nI0330 21:33:03.578097 930 log.go:172] (0xc0000f5550) Go away received\nI0330 21:33:03.578463 930 log.go:172] (0xc0000f5550) (0xc000685cc0) Stream removed, broadcasting: 1\nI0330 21:33:03.578496 930 log.go:172] (0xc0000f5550) (0xc00002e000) Stream removed, broadcasting: 3\nI0330 21:33:03.578516 930 log.go:172] (0xc0000f5550) (0xc000226000) Stream removed, broadcasting: 5\n" Mar 30 21:33:03.582: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 30 21:33:03.582: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 30 21:33:03.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6710 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 30 21:33:03.794: INFO: stderr: "I0330 21:33:03.720429 952 log.go:172] (0xc000978bb0) (0xc0009c2280) Create stream\nI0330 21:33:03.720487 952 log.go:172] (0xc000978bb0) (0xc0009c2280) Stream added, broadcasting: 1\nI0330 21:33:03.722910 952 log.go:172] (0xc000978bb0) Reply frame received for 1\nI0330 21:33:03.722944 952 log.go:172] (0xc000978bb0) (0xc0007c20a0) Create stream\nI0330 21:33:03.722952 952 log.go:172] (0xc000978bb0) (0xc0007c20a0) Stream added, broadcasting: 3\nI0330 21:33:03.724030 952 log.go:172] (0xc000978bb0) Reply frame received for 3\nI0330 21:33:03.724066 952 log.go:172] (0xc000978bb0) (0xc00093a000) Create stream\nI0330 21:33:03.724085 952 log.go:172] (0xc000978bb0) (0xc00093a000) Stream added, broadcasting: 5\nI0330 21:33:03.726019 952 log.go:172] (0xc000978bb0) Reply frame received for 5\nI0330 21:33:03.787940 952 log.go:172] (0xc000978bb0) Data frame received for 3\nI0330 21:33:03.787969 952 log.go:172] (0xc0007c20a0) (3) Data frame handling\nI0330 21:33:03.787984 952 log.go:172] (0xc0007c20a0) (3) Data frame 
sent\nI0330 21:33:03.788034 952 log.go:172] (0xc000978bb0) Data frame received for 5\nI0330 21:33:03.788075 952 log.go:172] (0xc00093a000) (5) Data frame handling\nI0330 21:33:03.788090 952 log.go:172] (0xc00093a000) (5) Data frame sent\nI0330 21:33:03.788112 952 log.go:172] (0xc000978bb0) Data frame received for 5\nI0330 21:33:03.788124 952 log.go:172] (0xc00093a000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0330 21:33:03.788160 952 log.go:172] (0xc000978bb0) Data frame received for 3\nI0330 21:33:03.788189 952 log.go:172] (0xc0007c20a0) (3) Data frame handling\nI0330 21:33:03.790113 952 log.go:172] (0xc000978bb0) Data frame received for 1\nI0330 21:33:03.790139 952 log.go:172] (0xc0009c2280) (1) Data frame handling\nI0330 21:33:03.790152 952 log.go:172] (0xc0009c2280) (1) Data frame sent\nI0330 21:33:03.790171 952 log.go:172] (0xc000978bb0) (0xc0009c2280) Stream removed, broadcasting: 1\nI0330 21:33:03.790191 952 log.go:172] (0xc000978bb0) Go away received\nI0330 21:33:03.790569 952 log.go:172] (0xc000978bb0) (0xc0009c2280) Stream removed, broadcasting: 1\nI0330 21:33:03.790588 952 log.go:172] (0xc000978bb0) (0xc0007c20a0) Stream removed, broadcasting: 3\nI0330 21:33:03.790599 952 log.go:172] (0xc000978bb0) (0xc00093a000) Stream removed, broadcasting: 5\n" Mar 30 21:33:03.794: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 30 21:33:03.794: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 30 21:33:03.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6710 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 30 21:33:04.008: INFO: stderr: "I0330 21:33:03.923568 975 log.go:172] (0xc00057a6e0) (0xc000687b80) Create stream\nI0330 21:33:03.923624 975 log.go:172] (0xc00057a6e0) (0xc000687b80) Stream added, broadcasting: 1\nI0330 21:33:03.926475 975 log.go:172] (0xc00057a6e0) Reply frame received for 1\nI0330 21:33:03.926514 975 log.go:172] (0xc00057a6e0) (0xc0009c0000) Create stream\nI0330 21:33:03.926527 975 log.go:172] (0xc00057a6e0) (0xc0009c0000) Stream added, broadcasting: 3\nI0330 21:33:03.927594 975 log.go:172] (0xc00057a6e0) Reply frame received for 3\nI0330 21:33:03.927664 975 log.go:172] (0xc00057a6e0) (0xc000687d60) Create stream\nI0330 21:33:03.927685 975 log.go:172] (0xc00057a6e0) (0xc000687d60) Stream added, broadcasting: 5\nI0330 21:33:03.928725 975 log.go:172] (0xc00057a6e0) Reply frame received for 5\nI0330 21:33:04.001005 975 log.go:172] (0xc00057a6e0) Data frame received for 3\nI0330 21:33:04.001029 975 log.go:172] (0xc0009c0000) (3) Data frame handling\nI0330 21:33:04.001038 975 log.go:172] (0xc0009c0000) (3) Data frame sent\nI0330 21:33:04.001044 975 log.go:172] (0xc00057a6e0) Data frame received for 3\nI0330 21:33:04.001049 975 log.go:172] (0xc0009c0000) (3) Data frame handling\nI0330 21:33:04.001073 975 log.go:172] (0xc00057a6e0) Data frame received for 5\nI0330 21:33:04.001084 975 log.go:172] (0xc000687d60) (5) Data frame handling\nI0330 21:33:04.001095 975 log.go:172] (0xc000687d60) (5) Data frame sent\nI0330 21:33:04.001105 975 log.go:172] (0xc00057a6e0) Data frame received for 5\nI0330 21:33:04.001198 975 log.go:172] (0xc000687d60) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0330 21:33:04.002885 975 log.go:172] (0xc00057a6e0) Data frame received for 1\nI0330 21:33:04.002910 975 
log.go:172] (0xc000687b80) (1) Data frame handling\nI0330 21:33:04.002926 975 log.go:172] (0xc000687b80) (1) Data frame sent\nI0330 21:33:04.003138 975 log.go:172] (0xc00057a6e0) (0xc000687b80) Stream removed, broadcasting: 1\nI0330 21:33:04.003181 975 log.go:172] (0xc00057a6e0) Go away received\nI0330 21:33:04.003519 975 log.go:172] (0xc00057a6e0) (0xc000687b80) Stream removed, broadcasting: 1\nI0330 21:33:04.003533 975 log.go:172] (0xc00057a6e0) (0xc0009c0000) Stream removed, broadcasting: 3\nI0330 21:33:04.003540 975 log.go:172] (0xc00057a6e0) (0xc000687d60) Stream removed, broadcasting: 5\n" Mar 30 21:33:04.008: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 30 21:33:04.008: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 30 21:33:04.008: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 30 21:33:24.027: INFO: Deleting all statefulset in ns statefulset-6710 Mar 30 21:33:24.031: INFO: Scaling statefulset ss to 0 Mar 30 21:33:24.039: INFO: Waiting for statefulset status.replicas updated to 0 Mar 30 21:33:24.041: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:33:24.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6710" for this suite. • [SLOW TEST:82.225 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":86,"skipped":1665,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:33:24.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 30 21:33:24.143: INFO: Waiting up to 5m0s for pod "pod-2ffe90da-338f-4645-a346-5e997f9fef98" in namespace "emptydir-6156" to be "success or failure" Mar 30 21:33:24.151: INFO: Pod 
"pod-2ffe90da-338f-4645-a346-5e997f9fef98": Phase="Pending", Reason="", readiness=false. Elapsed: 8.254896ms Mar 30 21:33:26.158: INFO: Pod "pod-2ffe90da-338f-4645-a346-5e997f9fef98": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015065867s Mar 30 21:33:28.162: INFO: Pod "pod-2ffe90da-338f-4645-a346-5e997f9fef98": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019233202s STEP: Saw pod success Mar 30 21:33:28.162: INFO: Pod "pod-2ffe90da-338f-4645-a346-5e997f9fef98" satisfied condition "success or failure" Mar 30 21:33:28.165: INFO: Trying to get logs from node jerma-worker pod pod-2ffe90da-338f-4645-a346-5e997f9fef98 container test-container: STEP: delete the pod Mar 30 21:33:28.201: INFO: Waiting for pod pod-2ffe90da-338f-4645-a346-5e997f9fef98 to disappear Mar 30 21:33:28.216: INFO: Pod pod-2ffe90da-338f-4645-a346-5e997f9fef98 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:33:28.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6156" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":87,"skipped":1668,"failed":0} ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:33:28.224: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-94311477-ec37-4a21-9b10-80ce52679997 STEP: Creating a pod to test consume secrets Mar 30 21:33:28.316: INFO: Waiting up to 5m0s for pod "pod-secrets-e170222f-0e74-448d-9bc6-71b812e9ad80" in namespace "secrets-4015" to be "success or failure" Mar 30 21:33:28.319: INFO: Pod "pod-secrets-e170222f-0e74-448d-9bc6-71b812e9ad80": Phase="Pending", Reason="", readiness=false. Elapsed: 3.387937ms Mar 30 21:33:30.323: INFO: Pod "pod-secrets-e170222f-0e74-448d-9bc6-71b812e9ad80": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006889919s Mar 30 21:33:32.327: INFO: Pod "pod-secrets-e170222f-0e74-448d-9bc6-71b812e9ad80": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011381561s STEP: Saw pod success Mar 30 21:33:32.327: INFO: Pod "pod-secrets-e170222f-0e74-448d-9bc6-71b812e9ad80" satisfied condition "success or failure" Mar 30 21:33:32.330: INFO: Trying to get logs from node jerma-worker pod pod-secrets-e170222f-0e74-448d-9bc6-71b812e9ad80 container secret-volume-test: STEP: delete the pod Mar 30 21:33:32.357: INFO: Waiting for pod pod-secrets-e170222f-0e74-448d-9bc6-71b812e9ad80 to disappear Mar 30 21:33:32.374: INFO: Pod pod-secrets-e170222f-0e74-448d-9bc6-71b812e9ad80 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:33:32.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4015" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":88,"skipped":1668,"failed":0} SSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:33:32.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's command Mar 30 21:33:32.460: INFO: Waiting up to 5m0s for pod "var-expansion-e81891a8-5d30-4cd4-8a1f-829e2fb5b814" in namespace "var-expansion-7513" to be "success or failure" Mar 30 21:33:32.481: INFO: Pod "var-expansion-e81891a8-5d30-4cd4-8a1f-829e2fb5b814": Phase="Pending", Reason="", readiness=false. Elapsed: 21.62776ms Mar 30 21:33:34.485: INFO: Pod "var-expansion-e81891a8-5d30-4cd4-8a1f-829e2fb5b814": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025564728s Mar 30 21:33:36.489: INFO: Pod "var-expansion-e81891a8-5d30-4cd4-8a1f-829e2fb5b814": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029434992s STEP: Saw pod success Mar 30 21:33:36.489: INFO: Pod "var-expansion-e81891a8-5d30-4cd4-8a1f-829e2fb5b814" satisfied condition "success or failure" Mar 30 21:33:36.492: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-e81891a8-5d30-4cd4-8a1f-829e2fb5b814 container dapi-container: STEP: delete the pod Mar 30 21:33:36.543: INFO: Waiting for pod var-expansion-e81891a8-5d30-4cd4-8a1f-829e2fb5b814 to disappear Mar 30 21:33:36.555: INFO: Pod var-expansion-e81891a8-5d30-4cd4-8a1f-829e2fb5b814 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:33:36.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7513" for this suite. 
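------------------------------
The Variable Expansion spec above depends on the kubelet substituting $(VAR_NAME) references in a container's command with values from that container's env before the process starts. A minimal client-go sketch of the kind of pod it creates follows; the namespace, names, env value, and echo command are illustrative stand-ins, not the values the framework generates:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a client from the same kubeconfig the suite uses.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{GenerateName: "var-expansion-"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:  "dapi-container",
                    Image: "docker.io/library/busybox:1.29",
                    // Kubernetes expands $(MESSAGE) from the env var below before
                    // the command runs, so the shell sees the literal value and
                    // never attempts command substitution.
                    Command: []string{"sh", "-c", "echo $(MESSAGE)"},
                    Env:     []corev1.EnvVar{{Name: "MESSAGE", Value: "hello-from-env"}},
                }},
            },
        }
        created, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("created pod", created.Name)
    }

If expansion works, the container log contains hello-from-env rather than the literal $(MESSAGE); the pass condition (pod reaches Succeeded, log matches the expected output) has the same shape as the emptydir and secrets specs earlier in this run.
------------------------------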
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":89,"skipped":1673,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:33:36.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 30 21:33:37.081: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 30 21:33:39.092: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721200817, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721200817, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721200817, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721200817, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 30 21:33:42.129: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 30 21:33:42.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-694-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:33:43.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5381" for this suite. STEP: Destroying namespace "webhook-5381-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.887 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":90,"skipped":1681,"failed":0} SSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:33:43.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Mar 30 21:33:43.498: INFO: Waiting up to 5m0s for pod "pod-8611e162-bc75-4e5e-98b7-4f0665ae4470" in namespace "emptydir-3116" to be "success or failure" Mar 30 21:33:43.501: INFO: Pod "pod-8611e162-bc75-4e5e-98b7-4f0665ae4470": Phase="Pending", Reason="", readiness=false. Elapsed: 3.822056ms Mar 30 21:33:45.505: INFO: Pod "pod-8611e162-bc75-4e5e-98b7-4f0665ae4470": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007462756s Mar 30 21:33:47.509: INFO: Pod "pod-8611e162-bc75-4e5e-98b7-4f0665ae4470": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011666004s STEP: Saw pod success Mar 30 21:33:47.509: INFO: Pod "pod-8611e162-bc75-4e5e-98b7-4f0665ae4470" satisfied condition "success or failure" Mar 30 21:33:47.512: INFO: Trying to get logs from node jerma-worker pod pod-8611e162-bc75-4e5e-98b7-4f0665ae4470 container test-container: STEP: delete the pod Mar 30 21:33:47.549: INFO: Waiting for pod pod-8611e162-bc75-4e5e-98b7-4f0665ae4470 to disappear Mar 30 21:33:47.555: INFO: Pod pod-8611e162-bc75-4e5e-98b7-4f0665ae4470 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:33:47.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3116" for this suite. 
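------------------------------
Both EmptyDir specs in this run follow the same pattern: mount an emptyDir volume (tmpfs-backed when the medium is Memory, as in the root,0777 case above; node-default storage in the earlier non-root,0666 case), create a file with the requested mode, print the observed permissions, and require the pod to reach Succeeded. A minimal sketch of such a pod spec in Go; it checks the mode with busybox stat as a stand-in for the agnhost mounttest helper the suite actually runs:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // emptyDirPod returns a pod that mounts a memory-backed emptyDir, creates a
    // file with mode 0777, and prints the mode back so a test can assert on the
    // container log after the pod succeeds.
    func emptyDirPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-emptydir-"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "test-volume",
                    VolumeSource: corev1.VolumeSource{
                        EmptyDir: &corev1.EmptyDirVolumeSource{
                            Medium: corev1.StorageMediumMemory, // tmpfs
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:  "test-container",
                    Image: "docker.io/library/busybox:1.29",
                    Command: []string{"sh", "-c",
                        "touch /test-volume/f && chmod 0777 /test-volume/f && stat -c '%a' /test-volume/f"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "test-volume",
                        MountPath: "/test-volume",
                    }},
                }},
            },
        }
    }

    func main() {
        fmt.Println("pod template:", emptyDirPod().GenerateName)
    }

Switching Medium to corev1.StorageMediumDefault, the mode to 0666, and adding a non-root SecurityContext gives the shape of the earlier non-root,0666,default variant.
------------------------------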
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":91,"skipped":1687,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:33:47.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Mar 30 21:33:47.616: INFO: PodSpec: initContainers in spec.initContainers Mar 30 21:34:36.132: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-8cb922be-55c4-4952-ab0d-9811b4609ee3", GenerateName:"", Namespace:"init-container-9827", SelfLink:"/api/v1/namespaces/init-container-9827/pods/pod-init-8cb922be-55c4-4952-ab0d-9811b4609ee3", UID:"32136071-0c5f-4a58-b204-561f1983c415", ResourceVersion:"4055868", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63721200827, loc:(*time.Location)(0x7d83a80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"616968082"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-p8q9j", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0065bbd40), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", 
Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-p8q9j", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-p8q9j", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-p8q9j", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0045dd198), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002407da0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", 
Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0045dd220)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0045dd240)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0045dd248), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0045dd24c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721200827, loc:(*time.Location)(0x7d83a80)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721200827, loc:(*time.Location)(0x7d83a80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721200827, loc:(*time.Location)(0x7d83a80)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721200827, loc:(*time.Location)(0x7d83a80)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.8", PodIP:"10.244.2.240", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.240"}}, StartTime:(*v1.Time)(0xc0053127a0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0019b6f50)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0019b6fc0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://ed291de4e41e3ccf94a1646e5b371394e072b06b99d397d908915f432d919ee9", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0053127e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", 
State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0053127c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc0045dd2cf)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:34:36.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9827" for this suite. • [SLOW TEST:48.634 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":92,"skipped":1698,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:34:36.198: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 30 21:34:36.247: INFO: Waiting up to 5m0s for pod "downwardapi-volume-18e05c61-0240-4d93-ad99-c1204d815b5e" in namespace "downward-api-3281" to be "success or failure" Mar 30 21:34:36.251: INFO: Pod "downwardapi-volume-18e05c61-0240-4d93-ad99-c1204d815b5e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.211916ms Mar 30 21:34:38.263: INFO: Pod "downwardapi-volume-18e05c61-0240-4d93-ad99-c1204d815b5e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01592666s Mar 30 21:34:40.268: INFO: Pod "downwardapi-volume-18e05c61-0240-4d93-ad99-c1204d815b5e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.020259933s STEP: Saw pod success Mar 30 21:34:40.268: INFO: Pod "downwardapi-volume-18e05c61-0240-4d93-ad99-c1204d815b5e" satisfied condition "success or failure" Mar 30 21:34:40.271: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-18e05c61-0240-4d93-ad99-c1204d815b5e container client-container: STEP: delete the pod Mar 30 21:34:40.295: INFO: Waiting for pod downwardapi-volume-18e05c61-0240-4d93-ad99-c1204d815b5e to disappear Mar 30 21:34:40.299: INFO: Pod downwardapi-volume-18e05c61-0240-4d93-ad99-c1204d815b5e no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:34:40.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3281" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":93,"skipped":1721,"failed":0} SSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:34:40.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Mar 30 21:34:40.432: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1875 /api/v1/namespaces/watch-1875/configmaps/e2e-watch-test-label-changed 16b50d07-0cde-496e-9b6e-26fb9dc35e80 4055901 0 2020-03-30 21:34:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 30 21:34:40.432: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1875 /api/v1/namespaces/watch-1875/configmaps/e2e-watch-test-label-changed 16b50d07-0cde-496e-9b6e-26fb9dc35e80 4055902 0 2020-03-30 21:34:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Mar 30 21:34:40.432: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1875 /api/v1/namespaces/watch-1875/configmaps/e2e-watch-test-label-changed 16b50d07-0cde-496e-9b6e-26fb9dc35e80 4055903 0 2020-03-30 21:34:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: 
Expecting to observe an add notification for the watched object when the label value was restored Mar 30 21:34:50.462: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1875 /api/v1/namespaces/watch-1875/configmaps/e2e-watch-test-label-changed 16b50d07-0cde-496e-9b6e-26fb9dc35e80 4055954 0 2020-03-30 21:34:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 30 21:34:50.462: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1875 /api/v1/namespaces/watch-1875/configmaps/e2e-watch-test-label-changed 16b50d07-0cde-496e-9b6e-26fb9dc35e80 4055955 0 2020-03-30 21:34:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Mar 30 21:34:50.462: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-1875 /api/v1/namespaces/watch-1875/configmaps/e2e-watch-test-label-changed 16b50d07-0cde-496e-9b6e-26fb9dc35e80 4055956 0 2020-03-30 21:34:40 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:34:50.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1875" for this suite. • [SLOW TEST:10.164 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":94,"skipped":1724,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:34:50.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 30 21:34:50.564: INFO: Waiting up to 5m0s for pod "downwardapi-volume-064baa30-ae36-43a4-8ac6-16fa73570b91" in namespace "downward-api-5714" to be "success or failure" Mar 30 21:34:50.573: INFO: Pod "downwardapi-volume-064baa30-ae36-43a4-8ac6-16fa73570b91": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.440184ms Mar 30 21:34:52.580: INFO: Pod "downwardapi-volume-064baa30-ae36-43a4-8ac6-16fa73570b91": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016465755s Mar 30 21:34:54.585: INFO: Pod "downwardapi-volume-064baa30-ae36-43a4-8ac6-16fa73570b91": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020663264s STEP: Saw pod success Mar 30 21:34:54.585: INFO: Pod "downwardapi-volume-064baa30-ae36-43a4-8ac6-16fa73570b91" satisfied condition "success or failure" Mar 30 21:34:54.588: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-064baa30-ae36-43a4-8ac6-16fa73570b91 container client-container: STEP: delete the pod Mar 30 21:34:54.605: INFO: Waiting for pod downwardapi-volume-064baa30-ae36-43a4-8ac6-16fa73570b91 to disappear Mar 30 21:34:54.623: INFO: Pod downwardapi-volume-064baa30-ae36-43a4-8ac6-16fa73570b91 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:34:54.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5714" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":95,"skipped":1729,"failed":0} SSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:34:54.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Mar 30 21:34:54.694: INFO: Pod name pod-release: Found 0 pods out of 1 Mar 30 21:34:59.712: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:34:59.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7831" for this suite. • [SLOW TEST:5.208 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":96,"skipped":1736,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:34:59.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:35:16.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2908" for this suite. • [SLOW TEST:16.209 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":278,"completed":97,"skipped":1759,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:35:16.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 30 21:35:16.103: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9a0500c2-c30e-44c2-bb25-8ff3108ec081" in namespace "projected-3883" to be "success or failure" Mar 30 21:35:16.107: INFO: Pod "downwardapi-volume-9a0500c2-c30e-44c2-bb25-8ff3108ec081": Phase="Pending", Reason="", readiness=false. Elapsed: 4.351617ms Mar 30 21:35:18.111: INFO: Pod "downwardapi-volume-9a0500c2-c30e-44c2-bb25-8ff3108ec081": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008781256s Mar 30 21:35:20.116: INFO: Pod "downwardapi-volume-9a0500c2-c30e-44c2-bb25-8ff3108ec081": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012999292s STEP: Saw pod success Mar 30 21:35:20.116: INFO: Pod "downwardapi-volume-9a0500c2-c30e-44c2-bb25-8ff3108ec081" satisfied condition "success or failure" Mar 30 21:35:20.119: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-9a0500c2-c30e-44c2-bb25-8ff3108ec081 container client-container: STEP: delete the pod Mar 30 21:35:20.144: INFO: Waiting for pod downwardapi-volume-9a0500c2-c30e-44c2-bb25-8ff3108ec081 to disappear Mar 30 21:35:20.164: INFO: Pod downwardapi-volume-9a0500c2-c30e-44c2-bb25-8ff3108ec081 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:35:20.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3883" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":98,"skipped":1792,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:35:20.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:35:24.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-326" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":99,"skipped":1805,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:35:24.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 30 21:35:24.337: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:35:25.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9546" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":100,"skipped":1826,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:35:25.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0330 21:36:05.504659 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 30 21:36:05.504: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:36:05.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3918" for this suite. 
• [SLOW TEST:40.133 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":101,"skipped":1838,"failed":0} [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:36:05.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 30 21:36:05.555: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-7290 I0330 21:36:05.598169 6 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-7290, replica count: 1 I0330 21:36:06.648582 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0330 21:36:07.648811 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0330 21:36:08.649016 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 30 21:36:08.786: INFO: Created: latency-svc-xf7fc Mar 30 21:36:08.798: INFO: Got endpoints: latency-svc-xf7fc [49.465223ms] Mar 30 21:36:08.828: INFO: Created: latency-svc-hm5x6 Mar 30 21:36:08.841: INFO: Got endpoints: latency-svc-hm5x6 [42.927026ms] Mar 30 21:36:08.860: INFO: Created: latency-svc-rzqjd Mar 30 21:36:08.871: INFO: Got endpoints: latency-svc-rzqjd [72.95896ms] Mar 30 21:36:08.918: INFO: Created: latency-svc-7d8r5 Mar 30 21:36:08.932: INFO: Got endpoints: latency-svc-7d8r5 [133.337329ms] Mar 30 21:36:08.954: INFO: Created: latency-svc-kjjfh Mar 30 21:36:08.967: INFO: Got endpoints: latency-svc-kjjfh [168.741031ms] Mar 30 21:36:08.990: INFO: Created: latency-svc-pqjkt Mar 30 21:36:09.036: INFO: Got endpoints: latency-svc-pqjkt [237.635934ms] Mar 30 21:36:09.056: INFO: Created: latency-svc-z5szk Mar 30 21:36:09.070: INFO: Got endpoints: latency-svc-z5szk [271.159534ms] Mar 30 21:36:09.092: INFO: Created: latency-svc-l4ksw Mar 30 21:36:09.107: INFO: Got endpoints: latency-svc-l4ksw [308.068945ms] Mar 30 21:36:09.128: INFO: Created: latency-svc-m5hbr Mar 30 21:36:09.162: INFO: Got endpoints: latency-svc-m5hbr [363.62695ms] Mar 30 21:36:09.176: INFO: Created: latency-svc-8fhr8 Mar 30 21:36:09.185: INFO: Got endpoints: latency-svc-8fhr8 [386.237568ms] Mar 30 21:36:09.206: INFO: Created: latency-svc-z49dp Mar 30 21:36:09.215: INFO: Got endpoints: latency-svc-z49dp [416.46788ms] Mar 30 21:36:09.238: INFO: Created: latency-svc-z296n Mar 30 21:36:09.250: INFO: Got 
endpoints: latency-svc-z296n [451.485336ms] Mar 30 21:36:09.294: INFO: Created: latency-svc-nb8gx Mar 30 21:36:09.321: INFO: Created: latency-svc-scjmk Mar 30 21:36:09.321: INFO: Got endpoints: latency-svc-nb8gx [522.156038ms] Mar 30 21:36:09.334: INFO: Got endpoints: latency-svc-scjmk [535.81884ms] Mar 30 21:36:09.362: INFO: Created: latency-svc-glf24 Mar 30 21:36:09.473: INFO: Got endpoints: latency-svc-glf24 [674.235945ms] Mar 30 21:36:09.475: INFO: Created: latency-svc-86jn2 Mar 30 21:36:09.485: INFO: Got endpoints: latency-svc-86jn2 [686.408488ms] Mar 30 21:36:09.506: INFO: Created: latency-svc-rfnfs Mar 30 21:36:09.516: INFO: Got endpoints: latency-svc-rfnfs [674.861434ms] Mar 30 21:36:09.549: INFO: Created: latency-svc-s7s4n Mar 30 21:36:09.565: INFO: Got endpoints: latency-svc-s7s4n [694.019126ms] Mar 30 21:36:09.629: INFO: Created: latency-svc-f5vvf Mar 30 21:36:09.656: INFO: Created: latency-svc-bq6kd Mar 30 21:36:09.656: INFO: Got endpoints: latency-svc-f5vvf [724.185038ms] Mar 30 21:36:09.667: INFO: Got endpoints: latency-svc-bq6kd [699.878556ms] Mar 30 21:36:09.686: INFO: Created: latency-svc-kxj8n Mar 30 21:36:09.704: INFO: Got endpoints: latency-svc-kxj8n [668.028516ms] Mar 30 21:36:09.722: INFO: Created: latency-svc-6fzw2 Mar 30 21:36:09.766: INFO: Got endpoints: latency-svc-6fzw2 [696.721195ms] Mar 30 21:36:09.782: INFO: Created: latency-svc-7wrfp Mar 30 21:36:09.794: INFO: Got endpoints: latency-svc-7wrfp [687.614697ms] Mar 30 21:36:09.811: INFO: Created: latency-svc-46sdw Mar 30 21:36:09.830: INFO: Got endpoints: latency-svc-46sdw [668.279877ms] Mar 30 21:36:09.859: INFO: Created: latency-svc-59cl7 Mar 30 21:36:09.922: INFO: Got endpoints: latency-svc-59cl7 [737.513583ms] Mar 30 21:36:09.925: INFO: Created: latency-svc-p4ckn Mar 30 21:36:09.933: INFO: Got endpoints: latency-svc-p4ckn [717.923532ms] Mar 30 21:36:09.975: INFO: Created: latency-svc-gc7pc Mar 30 21:36:09.987: INFO: Got endpoints: latency-svc-gc7pc [737.082209ms] Mar 30 21:36:10.009: INFO: Created: latency-svc-jr7rs Mar 30 21:36:10.090: INFO: Got endpoints: latency-svc-jr7rs [768.706664ms] Mar 30 21:36:10.092: INFO: Created: latency-svc-fjm82 Mar 30 21:36:10.101: INFO: Got endpoints: latency-svc-fjm82 [766.9112ms] Mar 30 21:36:10.119: INFO: Created: latency-svc-x59l2 Mar 30 21:36:10.132: INFO: Got endpoints: latency-svc-x59l2 [658.884093ms] Mar 30 21:36:10.162: INFO: Created: latency-svc-8qb4d Mar 30 21:36:10.167: INFO: Got endpoints: latency-svc-8qb4d [682.146269ms] Mar 30 21:36:10.228: INFO: Created: latency-svc-dfxg8 Mar 30 21:36:10.231: INFO: Got endpoints: latency-svc-dfxg8 [714.740593ms] Mar 30 21:36:10.262: INFO: Created: latency-svc-fhnnl Mar 30 21:36:10.276: INFO: Got endpoints: latency-svc-fhnnl [710.585363ms] Mar 30 21:36:10.322: INFO: Created: latency-svc-lnhdv Mar 30 21:36:10.359: INFO: Got endpoints: latency-svc-lnhdv [703.460633ms] Mar 30 21:36:10.376: INFO: Created: latency-svc-mgk5g Mar 30 21:36:10.393: INFO: Got endpoints: latency-svc-mgk5g [725.860082ms] Mar 30 21:36:10.425: INFO: Created: latency-svc-rt9vc Mar 30 21:36:10.447: INFO: Got endpoints: latency-svc-rt9vc [743.06118ms] Mar 30 21:36:10.502: INFO: Created: latency-svc-ddp8z Mar 30 21:36:10.518: INFO: Got endpoints: latency-svc-ddp8z [751.185291ms] Mar 30 21:36:10.537: INFO: Created: latency-svc-9qdcb Mar 30 21:36:10.555: INFO: Got endpoints: latency-svc-9qdcb [760.275597ms] Mar 30 21:36:10.574: INFO: Created: latency-svc-vtl27 Mar 30 21:36:10.629: INFO: Got endpoints: latency-svc-vtl27 [798.075904ms] Mar 30 21:36:10.676: INFO: 
Created: latency-svc-bbmjq Mar 30 21:36:10.686: INFO: Got endpoints: latency-svc-bbmjq [763.498304ms] Mar 30 21:36:10.712: INFO: Created: latency-svc-44566 Mar 30 21:36:10.732: INFO: Got endpoints: latency-svc-44566 [798.970987ms] Mar 30 21:36:10.771: INFO: Created: latency-svc-729x5 Mar 30 21:36:10.783: INFO: Got endpoints: latency-svc-729x5 [795.579209ms] Mar 30 21:36:10.802: INFO: Created: latency-svc-5hgrf Mar 30 21:36:10.819: INFO: Got endpoints: latency-svc-5hgrf [729.597748ms] Mar 30 21:36:10.838: INFO: Created: latency-svc-9jlx8 Mar 30 21:36:10.850: INFO: Got endpoints: latency-svc-9jlx8 [748.178068ms] Mar 30 21:36:10.916: INFO: Created: latency-svc-sz2sm Mar 30 21:36:10.919: INFO: Got endpoints: latency-svc-sz2sm [787.183399ms] Mar 30 21:36:10.946: INFO: Created: latency-svc-vgx66 Mar 30 21:36:10.958: INFO: Got endpoints: latency-svc-vgx66 [790.935013ms] Mar 30 21:36:10.976: INFO: Created: latency-svc-98zcg Mar 30 21:36:10.988: INFO: Got endpoints: latency-svc-98zcg [757.414894ms] Mar 30 21:36:11.006: INFO: Created: latency-svc-prjxh Mar 30 21:36:11.037: INFO: Got endpoints: latency-svc-prjxh [760.425562ms] Mar 30 21:36:11.048: INFO: Created: latency-svc-zvzf2 Mar 30 21:36:11.061: INFO: Got endpoints: latency-svc-zvzf2 [701.628881ms] Mar 30 21:36:11.083: INFO: Created: latency-svc-hvtr7 Mar 30 21:36:11.098: INFO: Got endpoints: latency-svc-hvtr7 [704.477584ms] Mar 30 21:36:11.119: INFO: Created: latency-svc-l27jq Mar 30 21:36:11.134: INFO: Got endpoints: latency-svc-l27jq [686.326752ms] Mar 30 21:36:11.180: INFO: Created: latency-svc-d57jg Mar 30 21:36:11.183: INFO: Got endpoints: latency-svc-d57jg [665.410044ms] Mar 30 21:36:11.210: INFO: Created: latency-svc-lg2l2 Mar 30 21:36:11.224: INFO: Got endpoints: latency-svc-lg2l2 [669.556233ms] Mar 30 21:36:11.252: INFO: Created: latency-svc-dzbz6 Mar 30 21:36:11.317: INFO: Got endpoints: latency-svc-dzbz6 [688.685748ms] Mar 30 21:36:11.329: INFO: Created: latency-svc-tv6m5 Mar 30 21:36:11.339: INFO: Got endpoints: latency-svc-tv6m5 [652.785184ms] Mar 30 21:36:11.359: INFO: Created: latency-svc-2l8b5 Mar 30 21:36:11.369: INFO: Got endpoints: latency-svc-2l8b5 [636.685449ms] Mar 30 21:36:11.390: INFO: Created: latency-svc-6c7sn Mar 30 21:36:11.406: INFO: Got endpoints: latency-svc-6c7sn [622.701163ms] Mar 30 21:36:11.451: INFO: Created: latency-svc-vpqsx Mar 30 21:36:11.491: INFO: Created: latency-svc-r429s Mar 30 21:36:11.491: INFO: Got endpoints: latency-svc-vpqsx [671.940281ms] Mar 30 21:36:11.514: INFO: Got endpoints: latency-svc-r429s [664.193681ms] Mar 30 21:36:11.599: INFO: Created: latency-svc-lltqb Mar 30 21:36:11.603: INFO: Got endpoints: latency-svc-lltqb [683.93376ms] Mar 30 21:36:11.637: INFO: Created: latency-svc-dbsnl Mar 30 21:36:11.652: INFO: Got endpoints: latency-svc-dbsnl [694.124863ms] Mar 30 21:36:11.672: INFO: Created: latency-svc-8qksb Mar 30 21:36:11.689: INFO: Got endpoints: latency-svc-8qksb [700.489972ms] Mar 30 21:36:11.731: INFO: Created: latency-svc-s9rfd Mar 30 21:36:11.761: INFO: Got endpoints: latency-svc-s9rfd [724.662845ms] Mar 30 21:36:11.792: INFO: Created: latency-svc-86v8z Mar 30 21:36:11.810: INFO: Got endpoints: latency-svc-86v8z [748.327219ms] Mar 30 21:36:11.880: INFO: Created: latency-svc-jm6g9 Mar 30 21:36:11.883: INFO: Got endpoints: latency-svc-jm6g9 [785.526315ms] Mar 30 21:36:11.930: INFO: Created: latency-svc-mnlws Mar 30 21:36:11.942: INFO: Got endpoints: latency-svc-mnlws [808.092465ms] Mar 30 21:36:11.966: INFO: Created: latency-svc-v2746 Mar 30 21:36:12.030: INFO: Got endpoints: 
latency-svc-v2746 [846.91866ms] Mar 30 21:36:12.043: INFO: Created: latency-svc-8g78g Mar 30 21:36:12.057: INFO: Got endpoints: latency-svc-8g78g [832.233127ms] Mar 30 21:36:12.080: INFO: Created: latency-svc-h4txj Mar 30 21:36:12.092: INFO: Got endpoints: latency-svc-h4txj [774.762897ms] Mar 30 21:36:12.110: INFO: Created: latency-svc-2mxch Mar 30 21:36:12.149: INFO: Got endpoints: latency-svc-2mxch [810.324429ms] Mar 30 21:36:12.169: INFO: Created: latency-svc-mlt4b Mar 30 21:36:12.193: INFO: Got endpoints: latency-svc-mlt4b [824.203644ms] Mar 30 21:36:12.218: INFO: Created: latency-svc-splt6 Mar 30 21:36:12.231: INFO: Got endpoints: latency-svc-splt6 [825.617518ms] Mar 30 21:36:12.248: INFO: Created: latency-svc-qjj7d Mar 30 21:36:12.281: INFO: Got endpoints: latency-svc-qjj7d [789.827861ms] Mar 30 21:36:12.305: INFO: Created: latency-svc-g5jj5 Mar 30 21:36:12.316: INFO: Got endpoints: latency-svc-g5jj5 [801.717524ms] Mar 30 21:36:12.355: INFO: Created: latency-svc-vzl7v Mar 30 21:36:12.379: INFO: Got endpoints: latency-svc-vzl7v [776.171327ms] Mar 30 21:36:12.427: INFO: Created: latency-svc-q2j9f Mar 30 21:36:12.460: INFO: Got endpoints: latency-svc-q2j9f [807.787189ms] Mar 30 21:36:12.512: INFO: Created: latency-svc-czdws Mar 30 21:36:12.563: INFO: Got endpoints: latency-svc-czdws [873.684278ms] Mar 30 21:36:12.726: INFO: Created: latency-svc-jcs98 Mar 30 21:36:12.749: INFO: Got endpoints: latency-svc-jcs98 [987.850545ms] Mar 30 21:36:12.893: INFO: Created: latency-svc-59krj Mar 30 21:36:12.899: INFO: Got endpoints: latency-svc-59krj [1.088937306s] Mar 30 21:36:12.968: INFO: Created: latency-svc-r4bqt Mar 30 21:36:13.187: INFO: Got endpoints: latency-svc-r4bqt [1.303637346s] Mar 30 21:36:13.256: INFO: Created: latency-svc-6ng77 Mar 30 21:36:13.277: INFO: Got endpoints: latency-svc-6ng77 [1.335065375s] Mar 30 21:36:13.539: INFO: Created: latency-svc-swfvh Mar 30 21:36:13.635: INFO: Got endpoints: latency-svc-swfvh [1.604491032s] Mar 30 21:36:13.688: INFO: Created: latency-svc-28jmr Mar 30 21:36:13.803: INFO: Got endpoints: latency-svc-28jmr [1.74613747s] Mar 30 21:36:13.846: INFO: Created: latency-svc-fxrt9 Mar 30 21:36:13.885: INFO: Got endpoints: latency-svc-fxrt9 [1.792201881s] Mar 30 21:36:14.206: INFO: Created: latency-svc-jbx4t Mar 30 21:36:14.219: INFO: Got endpoints: latency-svc-jbx4t [2.069289094s] Mar 30 21:36:14.243: INFO: Created: latency-svc-v82mg Mar 30 21:36:14.260: INFO: Got endpoints: latency-svc-v82mg [2.066625026s] Mar 30 21:36:14.291: INFO: Created: latency-svc-766l5 Mar 30 21:36:14.297: INFO: Got endpoints: latency-svc-766l5 [2.065837496s] Mar 30 21:36:14.503: INFO: Created: latency-svc-9spxg Mar 30 21:36:14.536: INFO: Got endpoints: latency-svc-9spxg [2.254480137s] Mar 30 21:36:14.559: INFO: Created: latency-svc-h9n4q Mar 30 21:36:14.573: INFO: Got endpoints: latency-svc-h9n4q [2.257749874s] Mar 30 21:36:14.721: INFO: Created: latency-svc-zqd4d Mar 30 21:36:14.782: INFO: Got endpoints: latency-svc-zqd4d [2.402953264s] Mar 30 21:36:14.929: INFO: Created: latency-svc-x6kct Mar 30 21:36:14.942: INFO: Got endpoints: latency-svc-x6kct [2.481399415s] Mar 30 21:36:14.975: INFO: Created: latency-svc-rgjhx Mar 30 21:36:15.180: INFO: Got endpoints: latency-svc-rgjhx [2.616761456s] Mar 30 21:36:15.378: INFO: Created: latency-svc-s4xl7 Mar 30 21:36:15.389: INFO: Got endpoints: latency-svc-s4xl7 [2.640211011s] Mar 30 21:36:15.414: INFO: Created: latency-svc-m6dr6 Mar 30 21:36:15.450: INFO: Got endpoints: latency-svc-m6dr6 [2.55157771s] Mar 30 21:36:15.792: INFO: Created: 
latency-svc-pllxz Mar 30 21:36:16.043: INFO: Got endpoints: latency-svc-pllxz [2.855889254s] Mar 30 21:36:16.134: INFO: Created: latency-svc-bq7wv Mar 30 21:36:16.258: INFO: Got endpoints: latency-svc-bq7wv [2.980976604s] Mar 30 21:36:16.302: INFO: Created: latency-svc-kr7tt Mar 30 21:36:16.326: INFO: Got endpoints: latency-svc-kr7tt [2.690767463s] Mar 30 21:36:16.590: INFO: Created: latency-svc-pbx54 Mar 30 21:36:16.621: INFO: Created: latency-svc-w9cls Mar 30 21:36:16.622: INFO: Got endpoints: latency-svc-pbx54 [2.818686955s] Mar 30 21:36:16.626: INFO: Got endpoints: latency-svc-w9cls [2.74147493s] Mar 30 21:36:16.671: INFO: Created: latency-svc-f6zk9 Mar 30 21:36:16.679: INFO: Got endpoints: latency-svc-f6zk9 [2.460487677s] Mar 30 21:36:16.832: INFO: Created: latency-svc-74wvw Mar 30 21:36:16.885: INFO: Got endpoints: latency-svc-74wvw [2.625543622s] Mar 30 21:36:16.921: INFO: Created: latency-svc-rmv9l Mar 30 21:36:16.965: INFO: Got endpoints: latency-svc-rmv9l [2.667485894s] Mar 30 21:36:16.998: INFO: Created: latency-svc-2f969 Mar 30 21:36:17.011: INFO: Got endpoints: latency-svc-2f969 [2.475348624s] Mar 30 21:36:17.041: INFO: Created: latency-svc-dt675 Mar 30 21:36:17.114: INFO: Got endpoints: latency-svc-dt675 [2.540869882s] Mar 30 21:36:17.149: INFO: Created: latency-svc-6wrjw Mar 30 21:36:17.158: INFO: Got endpoints: latency-svc-6wrjw [2.375835699s] Mar 30 21:36:17.184: INFO: Created: latency-svc-scwkq Mar 30 21:36:17.188: INFO: Got endpoints: latency-svc-scwkq [2.246526005s] Mar 30 21:36:17.214: INFO: Created: latency-svc-n8j74 Mar 30 21:36:17.281: INFO: Got endpoints: latency-svc-n8j74 [2.101738781s] Mar 30 21:36:17.311: INFO: Created: latency-svc-9cqjg Mar 30 21:36:17.328: INFO: Got endpoints: latency-svc-9cqjg [1.938008615s] Mar 30 21:36:17.365: INFO: Created: latency-svc-tx7wk Mar 30 21:36:17.431: INFO: Got endpoints: latency-svc-tx7wk [1.980927423s] Mar 30 21:36:17.448: INFO: Created: latency-svc-q2kkm Mar 30 21:36:17.453: INFO: Got endpoints: latency-svc-q2kkm [1.410491759s] Mar 30 21:36:17.521: INFO: Created: latency-svc-ghclh Mar 30 21:36:17.557: INFO: Got endpoints: latency-svc-ghclh [1.298753104s] Mar 30 21:36:17.616: INFO: Created: latency-svc-p5r2z Mar 30 21:36:17.712: INFO: Got endpoints: latency-svc-p5r2z [1.386842704s] Mar 30 21:36:17.755: INFO: Created: latency-svc-cqrd4 Mar 30 21:36:17.784: INFO: Got endpoints: latency-svc-cqrd4 [1.162518683s] Mar 30 21:36:17.844: INFO: Created: latency-svc-4jfvs Mar 30 21:36:17.847: INFO: Got endpoints: latency-svc-4jfvs [1.221297556s] Mar 30 21:36:17.898: INFO: Created: latency-svc-fbmmb Mar 30 21:36:17.910: INFO: Got endpoints: latency-svc-fbmmb [1.231275542s] Mar 30 21:36:17.934: INFO: Created: latency-svc-hff9k Mar 30 21:36:17.970: INFO: Got endpoints: latency-svc-hff9k [1.084386423s] Mar 30 21:36:17.988: INFO: Created: latency-svc-5vrr7 Mar 30 21:36:18.007: INFO: Got endpoints: latency-svc-5vrr7 [1.042595209s] Mar 30 21:36:18.037: INFO: Created: latency-svc-g979z Mar 30 21:36:18.049: INFO: Got endpoints: latency-svc-g979z [1.037976036s] Mar 30 21:36:18.168: INFO: Created: latency-svc-mzjcp Mar 30 21:36:18.182: INFO: Got endpoints: latency-svc-mzjcp [1.067277817s] Mar 30 21:36:18.205: INFO: Created: latency-svc-fz8sq Mar 30 21:36:18.218: INFO: Got endpoints: latency-svc-fz8sq [1.059316314s] Mar 30 21:36:18.247: INFO: Created: latency-svc-6m2sg Mar 30 21:36:18.260: INFO: Got endpoints: latency-svc-6m2sg [1.071717534s] Mar 30 21:36:18.318: INFO: Created: latency-svc-8xfg9 Mar 30 21:36:18.333: INFO: Got endpoints: 
latency-svc-8xfg9 [1.051848183s] Mar 30 21:36:18.384: INFO: Created: latency-svc-nrsvr Mar 30 21:36:18.398: INFO: Got endpoints: latency-svc-nrsvr [1.07061359s] Mar 30 21:36:18.444: INFO: Created: latency-svc-wqftv Mar 30 21:36:18.459: INFO: Got endpoints: latency-svc-wqftv [1.027560681s] Mar 30 21:36:18.505: INFO: Created: latency-svc-zz5zg Mar 30 21:36:18.513: INFO: Got endpoints: latency-svc-zz5zg [1.05945469s] Mar 30 21:36:18.575: INFO: Created: latency-svc-8hkzw Mar 30 21:36:18.585: INFO: Got endpoints: latency-svc-8hkzw [1.028391628s] Mar 30 21:36:18.638: INFO: Created: latency-svc-ffj24 Mar 30 21:36:18.731: INFO: Got endpoints: latency-svc-ffj24 [1.018703656s] Mar 30 21:36:18.731: INFO: Created: latency-svc-bm9b7 Mar 30 21:36:18.735: INFO: Got endpoints: latency-svc-bm9b7 [950.755867ms] Mar 30 21:36:18.756: INFO: Created: latency-svc-pzqwc Mar 30 21:36:18.772: INFO: Got endpoints: latency-svc-pzqwc [924.547193ms] Mar 30 21:36:18.798: INFO: Created: latency-svc-7njrg Mar 30 21:36:18.814: INFO: Got endpoints: latency-svc-7njrg [903.608264ms] Mar 30 21:36:18.894: INFO: Created: latency-svc-sqxnh Mar 30 21:36:18.904: INFO: Got endpoints: latency-svc-sqxnh [934.45675ms] Mar 30 21:36:18.924: INFO: Created: latency-svc-lfv46 Mar 30 21:36:18.935: INFO: Got endpoints: latency-svc-lfv46 [927.191946ms] Mar 30 21:36:18.960: INFO: Created: latency-svc-vjqvv Mar 30 21:36:18.977: INFO: Got endpoints: latency-svc-vjqvv [927.691606ms] Mar 30 21:36:19.048: INFO: Created: latency-svc-kpgs2 Mar 30 21:36:19.055: INFO: Got endpoints: latency-svc-kpgs2 [872.877847ms] Mar 30 21:36:19.080: INFO: Created: latency-svc-2tdzf Mar 30 21:36:19.098: INFO: Got endpoints: latency-svc-2tdzf [879.932247ms] Mar 30 21:36:19.116: INFO: Created: latency-svc-djmv5 Mar 30 21:36:19.127: INFO: Got endpoints: latency-svc-djmv5 [867.331602ms] Mar 30 21:36:19.186: INFO: Created: latency-svc-mq29s Mar 30 21:36:19.194: INFO: Got endpoints: latency-svc-mq29s [860.171766ms] Mar 30 21:36:19.212: INFO: Created: latency-svc-8tfr2 Mar 30 21:36:19.224: INFO: Got endpoints: latency-svc-8tfr2 [825.696979ms] Mar 30 21:36:19.260: INFO: Created: latency-svc-lk4zx Mar 30 21:36:19.272: INFO: Got endpoints: latency-svc-lk4zx [813.553823ms] Mar 30 21:36:19.329: INFO: Created: latency-svc-vhq25 Mar 30 21:36:19.345: INFO: Got endpoints: latency-svc-vhq25 [832.229626ms] Mar 30 21:36:19.380: INFO: Created: latency-svc-7nn26 Mar 30 21:36:19.410: INFO: Got endpoints: latency-svc-7nn26 [824.443559ms] Mar 30 21:36:19.493: INFO: Created: latency-svc-qdk88 Mar 30 21:36:19.501: INFO: Got endpoints: latency-svc-qdk88 [770.298042ms] Mar 30 21:36:19.555: INFO: Created: latency-svc-mdsv5 Mar 30 21:36:19.586: INFO: Got endpoints: latency-svc-mdsv5 [851.034124ms] Mar 30 21:36:19.628: INFO: Created: latency-svc-lp2dl Mar 30 21:36:19.633: INFO: Got endpoints: latency-svc-lp2dl [861.412531ms] Mar 30 21:36:19.662: INFO: Created: latency-svc-jpsbg Mar 30 21:36:19.676: INFO: Got endpoints: latency-svc-jpsbg [861.726065ms] Mar 30 21:36:19.704: INFO: Created: latency-svc-tg6wz Mar 30 21:36:19.725: INFO: Got endpoints: latency-svc-tg6wz [820.149173ms] Mar 30 21:36:19.767: INFO: Created: latency-svc-w4tvl Mar 30 21:36:19.789: INFO: Got endpoints: latency-svc-w4tvl [854.051076ms] Mar 30 21:36:19.836: INFO: Created: latency-svc-w5vvk Mar 30 21:36:19.851: INFO: Got endpoints: latency-svc-w5vvk [873.721719ms] Mar 30 21:36:19.905: INFO: Created: latency-svc-n6jcp Mar 30 21:36:19.908: INFO: Got endpoints: latency-svc-n6jcp [853.326019ms] Mar 30 21:36:19.963: INFO: Created: 
latency-svc-g89mk Mar 30 21:36:19.987: INFO: Got endpoints: latency-svc-g89mk [888.962421ms] Mar 30 21:36:20.036: INFO: Created: latency-svc-2c48j Mar 30 21:36:20.055: INFO: Got endpoints: latency-svc-2c48j [927.909606ms] Mar 30 21:36:20.106: INFO: Created: latency-svc-r6h2v Mar 30 21:36:20.121: INFO: Got endpoints: latency-svc-r6h2v [927.684844ms] Mar 30 21:36:20.174: INFO: Created: latency-svc-fzs97 Mar 30 21:36:20.177: INFO: Got endpoints: latency-svc-fzs97 [952.643143ms] Mar 30 21:36:20.220: INFO: Created: latency-svc-qxtrt Mar 30 21:36:20.257: INFO: Got endpoints: latency-svc-qxtrt [984.435344ms] Mar 30 21:36:20.305: INFO: Created: latency-svc-kc8m6 Mar 30 21:36:20.309: INFO: Got endpoints: latency-svc-kc8m6 [963.569568ms] Mar 30 21:36:20.340: INFO: Created: latency-svc-4fb7r Mar 30 21:36:20.359: INFO: Got endpoints: latency-svc-4fb7r [949.339833ms] Mar 30 21:36:20.388: INFO: Created: latency-svc-m594h Mar 30 21:36:20.402: INFO: Got endpoints: latency-svc-m594h [900.103184ms] Mar 30 21:36:20.443: INFO: Created: latency-svc-jv94v Mar 30 21:36:20.446: INFO: Got endpoints: latency-svc-jv94v [860.074987ms] Mar 30 21:36:20.489: INFO: Created: latency-svc-7mbfn Mar 30 21:36:20.504: INFO: Got endpoints: latency-svc-7mbfn [870.825844ms] Mar 30 21:36:20.526: INFO: Created: latency-svc-bqd7j Mar 30 21:36:20.534: INFO: Got endpoints: latency-svc-bqd7j [858.524273ms] Mar 30 21:36:20.581: INFO: Created: latency-svc-4t26g Mar 30 21:36:20.584: INFO: Got endpoints: latency-svc-4t26g [859.46752ms] Mar 30 21:36:20.610: INFO: Created: latency-svc-vrbwk Mar 30 21:36:20.625: INFO: Got endpoints: latency-svc-vrbwk [836.619032ms] Mar 30 21:36:20.652: INFO: Created: latency-svc-k6dzv Mar 30 21:36:20.719: INFO: Got endpoints: latency-svc-k6dzv [868.314957ms] Mar 30 21:36:20.731: INFO: Created: latency-svc-4mpr5 Mar 30 21:36:20.739: INFO: Got endpoints: latency-svc-4mpr5 [831.420927ms] Mar 30 21:36:20.766: INFO: Created: latency-svc-95l8b Mar 30 21:36:20.776: INFO: Got endpoints: latency-svc-95l8b [788.866025ms] Mar 30 21:36:20.795: INFO: Created: latency-svc-rmdt8 Mar 30 21:36:20.806: INFO: Got endpoints: latency-svc-rmdt8 [750.920252ms] Mar 30 21:36:20.850: INFO: Created: latency-svc-m8x25 Mar 30 21:36:20.854: INFO: Got endpoints: latency-svc-m8x25 [732.810115ms] Mar 30 21:36:20.880: INFO: Created: latency-svc-bzf6t Mar 30 21:36:20.903: INFO: Got endpoints: latency-svc-bzf6t [726.248481ms] Mar 30 21:36:20.928: INFO: Created: latency-svc-jzvj8 Mar 30 21:36:20.939: INFO: Got endpoints: latency-svc-jzvj8 [682.274084ms] Mar 30 21:36:20.988: INFO: Created: latency-svc-shlkt Mar 30 21:36:20.995: INFO: Got endpoints: latency-svc-shlkt [686.360081ms] Mar 30 21:36:21.036: INFO: Created: latency-svc-trl7r Mar 30 21:36:21.066: INFO: Got endpoints: latency-svc-trl7r [706.953331ms] Mar 30 21:36:21.132: INFO: Created: latency-svc-8dzhv Mar 30 21:36:21.162: INFO: Got endpoints: latency-svc-8dzhv [760.732521ms] Mar 30 21:36:21.163: INFO: Created: latency-svc-zqvkc Mar 30 21:36:21.174: INFO: Got endpoints: latency-svc-zqvkc [728.094377ms] Mar 30 21:36:21.198: INFO: Created: latency-svc-m5gb9 Mar 30 21:36:21.210: INFO: Got endpoints: latency-svc-m5gb9 [705.997502ms] Mar 30 21:36:21.269: INFO: Created: latency-svc-cgr2x Mar 30 21:36:21.272: INFO: Got endpoints: latency-svc-cgr2x [737.646335ms] Mar 30 21:36:21.319: INFO: Created: latency-svc-xk9jg Mar 30 21:36:21.331: INFO: Got endpoints: latency-svc-xk9jg [746.743987ms] Mar 30 21:36:21.353: INFO: Created: latency-svc-bfcrg Mar 30 21:36:21.367: INFO: Got endpoints: 
latency-svc-bfcrg [741.630517ms] Mar 30 21:36:21.426: INFO: Created: latency-svc-hm8fv Mar 30 21:36:21.439: INFO: Got endpoints: latency-svc-hm8fv [720.201174ms] Mar 30 21:36:21.462: INFO: Created: latency-svc-7hqtd Mar 30 21:36:21.476: INFO: Got endpoints: latency-svc-7hqtd [736.247825ms] Mar 30 21:36:21.551: INFO: Created: latency-svc-dd72k Mar 30 21:36:21.563: INFO: Got endpoints: latency-svc-dd72k [787.29267ms] Mar 30 21:36:21.600: INFO: Created: latency-svc-dzxx5 Mar 30 21:36:21.614: INFO: Got endpoints: latency-svc-dzxx5 [807.718338ms] Mar 30 21:36:21.695: INFO: Created: latency-svc-tx9z8 Mar 30 21:36:21.699: INFO: Got endpoints: latency-svc-tx9z8 [844.626474ms] Mar 30 21:36:21.726: INFO: Created: latency-svc-km262 Mar 30 21:36:21.741: INFO: Got endpoints: latency-svc-km262 [838.033305ms] Mar 30 21:36:21.762: INFO: Created: latency-svc-j2667 Mar 30 21:36:21.778: INFO: Got endpoints: latency-svc-j2667 [838.842327ms] Mar 30 21:36:21.839: INFO: Created: latency-svc-4qkqr Mar 30 21:36:21.841: INFO: Got endpoints: latency-svc-4qkqr [845.883693ms] Mar 30 21:36:21.863: INFO: Created: latency-svc-cfcfg Mar 30 21:36:21.874: INFO: Got endpoints: latency-svc-cfcfg [807.763905ms] Mar 30 21:36:21.906: INFO: Created: latency-svc-6hbvf Mar 30 21:36:21.922: INFO: Got endpoints: latency-svc-6hbvf [759.562717ms] Mar 30 21:36:21.976: INFO: Created: latency-svc-zf2fq Mar 30 21:36:21.996: INFO: Got endpoints: latency-svc-zf2fq [821.804836ms] Mar 30 21:36:22.002: INFO: Created: latency-svc-b2l2b Mar 30 21:36:22.019: INFO: Got endpoints: latency-svc-b2l2b [808.211289ms] Mar 30 21:36:22.038: INFO: Created: latency-svc-nwvdg Mar 30 21:36:22.049: INFO: Got endpoints: latency-svc-nwvdg [776.423441ms] Mar 30 21:36:22.156: INFO: Created: latency-svc-42x58 Mar 30 21:36:22.158: INFO: Got endpoints: latency-svc-42x58 [827.228729ms] Mar 30 21:36:22.195: INFO: Created: latency-svc-hnpmm Mar 30 21:36:22.211: INFO: Got endpoints: latency-svc-hnpmm [844.271172ms] Mar 30 21:36:22.236: INFO: Created: latency-svc-pl5vb Mar 30 21:36:22.248: INFO: Got endpoints: latency-svc-pl5vb [808.283419ms] Mar 30 21:36:22.311: INFO: Created: latency-svc-b88v6 Mar 30 21:36:22.320: INFO: Got endpoints: latency-svc-b88v6 [844.172834ms] Mar 30 21:36:22.344: INFO: Created: latency-svc-h6m7p Mar 30 21:36:22.356: INFO: Got endpoints: latency-svc-h6m7p [793.115021ms] Mar 30 21:36:22.380: INFO: Created: latency-svc-8cj56 Mar 30 21:36:22.392: INFO: Got endpoints: latency-svc-8cj56 [778.064952ms] Mar 30 21:36:22.455: INFO: Created: latency-svc-r5xnn Mar 30 21:36:22.481: INFO: Got endpoints: latency-svc-r5xnn [782.55086ms] Mar 30 21:36:22.482: INFO: Created: latency-svc-rlgd5 Mar 30 21:36:22.495: INFO: Got endpoints: latency-svc-rlgd5 [754.098823ms] Mar 30 21:36:22.524: INFO: Created: latency-svc-t256d Mar 30 21:36:22.548: INFO: Got endpoints: latency-svc-t256d [769.823637ms] Mar 30 21:36:22.599: INFO: Created: latency-svc-ssm2z Mar 30 21:36:22.602: INFO: Got endpoints: latency-svc-ssm2z [760.654121ms] Mar 30 21:36:22.625: INFO: Created: latency-svc-hjfbw Mar 30 21:36:22.640: INFO: Got endpoints: latency-svc-hjfbw [765.97058ms] Mar 30 21:36:22.640: INFO: Latencies: [42.927026ms 72.95896ms 133.337329ms 168.741031ms 237.635934ms 271.159534ms 308.068945ms 363.62695ms 386.237568ms 416.46788ms 451.485336ms 522.156038ms 535.81884ms 622.701163ms 636.685449ms 652.785184ms 658.884093ms 664.193681ms 665.410044ms 668.028516ms 668.279877ms 669.556233ms 671.940281ms 674.235945ms 674.861434ms 682.146269ms 682.274084ms 683.93376ms 686.326752ms 686.360081ms 
686.408488ms 687.614697ms 688.685748ms 694.019126ms 694.124863ms 696.721195ms 699.878556ms 700.489972ms 701.628881ms 703.460633ms 704.477584ms 705.997502ms 706.953331ms 710.585363ms 714.740593ms 717.923532ms 720.201174ms 724.185038ms 724.662845ms 725.860082ms 726.248481ms 728.094377ms 729.597748ms 732.810115ms 736.247825ms 737.082209ms 737.513583ms 737.646335ms 741.630517ms 743.06118ms 746.743987ms 748.178068ms 748.327219ms 750.920252ms 751.185291ms 754.098823ms 757.414894ms 759.562717ms 760.275597ms 760.425562ms 760.654121ms 760.732521ms 763.498304ms 765.97058ms 766.9112ms 768.706664ms 769.823637ms 770.298042ms 774.762897ms 776.171327ms 776.423441ms 778.064952ms 782.55086ms 785.526315ms 787.183399ms 787.29267ms 788.866025ms 789.827861ms 790.935013ms 793.115021ms 795.579209ms 798.075904ms 798.970987ms 801.717524ms 807.718338ms 807.763905ms 807.787189ms 808.092465ms 808.211289ms 808.283419ms 810.324429ms 813.553823ms 820.149173ms 821.804836ms 824.203644ms 824.443559ms 825.617518ms 825.696979ms 827.228729ms 831.420927ms 832.229626ms 832.233127ms 836.619032ms 838.033305ms 838.842327ms 844.172834ms 844.271172ms 844.626474ms 845.883693ms 846.91866ms 851.034124ms 853.326019ms 854.051076ms 858.524273ms 859.46752ms 860.074987ms 860.171766ms 861.412531ms 861.726065ms 867.331602ms 868.314957ms 870.825844ms 872.877847ms 873.684278ms 873.721719ms 879.932247ms 888.962421ms 900.103184ms 903.608264ms 924.547193ms 927.191946ms 927.684844ms 927.691606ms 927.909606ms 934.45675ms 949.339833ms 950.755867ms 952.643143ms 963.569568ms 984.435344ms 987.850545ms 1.018703656s 1.027560681s 1.028391628s 1.037976036s 1.042595209s 1.051848183s 1.059316314s 1.05945469s 1.067277817s 1.07061359s 1.071717534s 1.084386423s 1.088937306s 1.162518683s 1.221297556s 1.231275542s 1.298753104s 1.303637346s 1.335065375s 1.386842704s 1.410491759s 1.604491032s 1.74613747s 1.792201881s 1.938008615s 1.980927423s 2.065837496s 2.066625026s 2.069289094s 2.101738781s 2.246526005s 2.254480137s 2.257749874s 2.375835699s 2.402953264s 2.460487677s 2.475348624s 2.481399415s 2.540869882s 2.55157771s 2.616761456s 2.625543622s 2.640211011s 2.667485894s 2.690767463s 2.74147493s 2.818686955s 2.855889254s 2.980976604s] Mar 30 21:36:22.640: INFO: 50 %ile: 810.324429ms Mar 30 21:36:22.640: INFO: 90 %ile: 2.101738781s Mar 30 21:36:22.640: INFO: 99 %ile: 2.855889254s Mar 30 21:36:22.640: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:36:22.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-7290" for this suite. 
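
Each "Got endpoints: ... [elapsed]" line above is the time from creating a Service to seeing its Endpoints object gain addresses, and the percentiles at the end summarize 200 such samples. A rough sketch of taking one sample, under the same pre-1.18 client-go assumption, with retries and error handling trimmed:

package sketch

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// timeEndpoints creates a service over an existing backend pod set and
// reports how long the endpoints controller takes to publish addresses.
func timeEndpoints(c kubernetes.Interface, ns string, selector map[string]string) (time.Duration, error) {
	start := time.Now()
	svc, err := c.CoreV1().Services(ns).Create(&corev1.Service{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "latency-svc-"},
		Spec: corev1.ServiceSpec{
			Selector: selector,
			Ports:    []corev1.ServicePort{{Port: 80}},
		},
	})
	if err != nil {
		return 0, err
	}
	// Watch only the Endpoints object that mirrors this service's name.
	w, err := c.CoreV1().Endpoints(ns).Watch(metav1.ListOptions{
		FieldSelector: "metadata.name=" + svc.Name,
	})
	if err != nil {
		return 0, err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		if ep, ok := ev.Object.(*corev1.Endpoints); ok && len(ep.Subsets) > 0 {
			elapsed := time.Since(start)
			fmt.Printf("Got endpoints: %s [%v]\n", ep.Name, elapsed)
			return elapsed, nil
		}
	}
	return 0, fmt.Errorf("watch closed before endpoints appeared")
}

The Watchers test earlier in this run drives the same ResultChan loop, just keyed on a label selector instead of a field selector.
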
• [SLOW TEST:17.138 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":102,"skipped":1838,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:36:22.651: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:36:22.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2324" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":103,"skipped":1871,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:36:22.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 30 21:36:23.242: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 30 21:36:25.258: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721200983, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721200983, loc:(*time.Location)(0x7d83a80)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721200983, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721200983, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 30 21:36:28.297: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:36:28.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8046" for this suite. STEP: Destroying namespace "webhook-8046-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.140 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":104,"skipped":1885,"failed":0} S ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:36:28.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with configMap that has name projected-configmap-test-upd-14f2760e-e678-4331-99e6-8a5fbd2e1865 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-14f2760e-e678-4331-99e6-8a5fbd2e1865 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:36:35.038: INFO: Waiting up 
to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5092" for this suite. • [SLOW TEST:6.293 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":105,"skipped":1886,"failed":0} SSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:36:35.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-78561ae8-65f8-45f9-8e29-e97460708f75 STEP: Creating a pod to test consume configMaps Mar 30 21:36:35.319: INFO: Waiting up to 5m0s for pod "pod-configmaps-70f2e01e-b4b8-43f4-8ce0-0aceedc3fb30" in namespace "configmap-8694" to be "success or failure" Mar 30 21:36:35.341: INFO: Pod "pod-configmaps-70f2e01e-b4b8-43f4-8ce0-0aceedc3fb30": Phase="Pending", Reason="", readiness=false. Elapsed: 21.15796ms Mar 30 21:36:37.587: INFO: Pod "pod-configmaps-70f2e01e-b4b8-43f4-8ce0-0aceedc3fb30": Phase="Pending", Reason="", readiness=false. Elapsed: 2.267861527s Mar 30 21:36:39.605: INFO: Pod "pod-configmaps-70f2e01e-b4b8-43f4-8ce0-0aceedc3fb30": Phase="Running", Reason="", readiness=true. Elapsed: 4.285525643s Mar 30 21:36:41.630: INFO: Pod "pod-configmaps-70f2e01e-b4b8-43f4-8ce0-0aceedc3fb30": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.310081992s STEP: Saw pod success Mar 30 21:36:41.630: INFO: Pod "pod-configmaps-70f2e01e-b4b8-43f4-8ce0-0aceedc3fb30" satisfied condition "success or failure" Mar 30 21:36:41.632: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-70f2e01e-b4b8-43f4-8ce0-0aceedc3fb30 container configmap-volume-test: STEP: delete the pod Mar 30 21:36:41.761: INFO: Waiting for pod pod-configmaps-70f2e01e-b4b8-43f4-8ce0-0aceedc3fb30 to disappear Mar 30 21:36:41.764: INFO: Pod pod-configmaps-70f2e01e-b4b8-43f4-8ce0-0aceedc3fb30 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:36:41.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8694" for this suite. 
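The "with mappings" variant exercised above differs from the plain configMap-volume case in one field: an Items list that remaps a configMap key to an explicit file path instead of mounting every key under its own name. A sketch of that volume source, assuming the k8s.io/api/core/v1 types this suite is built against; the configMap name is the one from the run, while key and path values are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0644)
	vol := corev1.Volume{
		Name: "configmap-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{
					Name: "configmap-test-volume-map-78561ae8-65f8-45f9-8e29-e97460708f75",
				},
				// The mapping: key data-1 lands at path/to/data-2 rather than
				// at a file named after the key (values illustrative).
				Items: []corev1.KeyToPath{
					{Key: "data-1", Path: "path/to/data-2", Mode: &mode},
				},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}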
• [SLOW TEST:6.696 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":106,"skipped":1890,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:36:41.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1733 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 30 21:36:41.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-8934' Mar 30 21:36:42.043: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 30 21:36:42.043: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1738 Mar 30 21:36:44.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-8934' Mar 30 21:36:44.432: INFO: stderr: "" Mar 30 21:36:44.432: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:36:44.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8934" for this suite. 
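The stderr captured above warns that kubectl run --generator=deployment/apps.v1 is deprecated in favor of kubectl create. A sketch of the suggested replacement, shelled out the way the suite invokes kubectl; flags mirror the logged command, but this is an illustration, not what the v1.17 test itself runs:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Replacement for the deprecated generator form (kubeconfig path,
	// deployment name, image, and namespace copied from the run above).
	cmd := exec.Command("kubectl",
		"--kubeconfig=/root/.kube/config",
		"create", "deployment", "e2e-test-httpd-deployment",
		"--image=docker.io/library/httpd:2.4.38-alpine",
		"--namespace=kubectl-8934",
	)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("kubectl failed:", err)
	}
}

kubectl create deployment produces an apps/v1 Deployment directly, without going through the deprecated generator machinery.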
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":278,"completed":107,"skipped":1908,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:36:44.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service nodeport-test with type=NodePort in namespace services-7663 STEP: creating replication controller nodeport-test in namespace services-7663 I0330 21:36:44.754441 6 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-7663, replica count: 2 I0330 21:36:47.804900 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0330 21:36:50.805080 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 30 21:36:50.805: INFO: Creating new exec pod Mar 30 21:36:55.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7663 execpodtd9t8 -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Mar 30 21:36:56.061: INFO: stderr: "I0330 21:36:55.960660 1037 log.go:172] (0xc000a10630) (0xc000a70000) Create stream\nI0330 21:36:55.960717 1037 log.go:172] (0xc000a10630) (0xc000a70000) Stream added, broadcasting: 1\nI0330 21:36:55.963584 1037 log.go:172] (0xc000a10630) Reply frame received for 1\nI0330 21:36:55.963639 1037 log.go:172] (0xc000a10630) (0xc00062bb80) Create stream\nI0330 21:36:55.963660 1037 log.go:172] (0xc000a10630) (0xc00062bb80) Stream added, broadcasting: 3\nI0330 21:36:55.964738 1037 log.go:172] (0xc000a10630) Reply frame received for 3\nI0330 21:36:55.964787 1037 log.go:172] (0xc000a10630) (0xc000200000) Create stream\nI0330 21:36:55.964805 1037 log.go:172] (0xc000a10630) (0xc000200000) Stream added, broadcasting: 5\nI0330 21:36:55.965864 1037 log.go:172] (0xc000a10630) Reply frame received for 5\nI0330 21:36:56.052668 1037 log.go:172] (0xc000a10630) Data frame received for 5\nI0330 21:36:56.052704 1037 log.go:172] (0xc000200000) (5) Data frame handling\nI0330 21:36:56.052725 1037 log.go:172] (0xc000200000) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0330 21:36:56.053433 1037 log.go:172] (0xc000a10630) Data frame received for 5\nI0330 21:36:56.053458 1037 log.go:172] (0xc000200000) (5) Data frame handling\nI0330 21:36:56.053491 1037 log.go:172] (0xc000200000) (5) Data frame sent\nI0330 21:36:56.053514 1037 log.go:172] (0xc000a10630) Data frame received for 5\nI0330 21:36:56.053526 1037 log.go:172] (0xc000200000) (5) Data frame handling\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0330 21:36:56.053769 1037 log.go:172] 
(0xc000a10630) Data frame received for 3\nI0330 21:36:56.053795 1037 log.go:172] (0xc00062bb80) (3) Data frame handling\nI0330 21:36:56.055851 1037 log.go:172] (0xc000a10630) Data frame received for 1\nI0330 21:36:56.055872 1037 log.go:172] (0xc000a70000) (1) Data frame handling\nI0330 21:36:56.055885 1037 log.go:172] (0xc000a70000) (1) Data frame sent\nI0330 21:36:56.055994 1037 log.go:172] (0xc000a10630) (0xc000a70000) Stream removed, broadcasting: 1\nI0330 21:36:56.056087 1037 log.go:172] (0xc000a10630) Go away received\nI0330 21:36:56.056432 1037 log.go:172] (0xc000a10630) (0xc000a70000) Stream removed, broadcasting: 1\nI0330 21:36:56.056455 1037 log.go:172] (0xc000a10630) (0xc00062bb80) Stream removed, broadcasting: 3\nI0330 21:36:56.056467 1037 log.go:172] (0xc000a10630) (0xc000200000) Stream removed, broadcasting: 5\n" Mar 30 21:36:56.061: INFO: stdout: "" Mar 30 21:36:56.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7663 execpodtd9t8 -- /bin/sh -x -c nc -zv -t -w 2 10.105.69.243 80' Mar 30 21:36:56.278: INFO: stderr: "I0330 21:36:56.200707 1060 log.go:172] (0xc000a069a0) (0xc000a9a000) Create stream\nI0330 21:36:56.200786 1060 log.go:172] (0xc000a069a0) (0xc000a9a000) Stream added, broadcasting: 1\nI0330 21:36:56.209250 1060 log.go:172] (0xc000a069a0) Reply frame received for 1\nI0330 21:36:56.209306 1060 log.go:172] (0xc000a069a0) (0xc0006c8000) Create stream\nI0330 21:36:56.209318 1060 log.go:172] (0xc000a069a0) (0xc0006c8000) Stream added, broadcasting: 3\nI0330 21:36:56.211400 1060 log.go:172] (0xc000a069a0) Reply frame received for 3\nI0330 21:36:56.211431 1060 log.go:172] (0xc000a069a0) (0xc0006c80a0) Create stream\nI0330 21:36:56.211441 1060 log.go:172] (0xc000a069a0) (0xc0006c80a0) Stream added, broadcasting: 5\nI0330 21:36:56.212367 1060 log.go:172] (0xc000a069a0) Reply frame received for 5\nI0330 21:36:56.273705 1060 log.go:172] (0xc000a069a0) Data frame received for 5\nI0330 21:36:56.273742 1060 log.go:172] (0xc0006c80a0) (5) Data frame handling\nI0330 21:36:56.273762 1060 log.go:172] (0xc0006c80a0) (5) Data frame sent\n+ nc -zv -t -w 2 10.105.69.243 80\nConnection to 10.105.69.243 80 port [tcp/http] succeeded!\nI0330 21:36:56.273774 1060 log.go:172] (0xc000a069a0) Data frame received for 5\nI0330 21:36:56.273815 1060 log.go:172] (0xc0006c80a0) (5) Data frame handling\nI0330 21:36:56.273854 1060 log.go:172] (0xc000a069a0) Data frame received for 3\nI0330 21:36:56.273887 1060 log.go:172] (0xc0006c8000) (3) Data frame handling\nI0330 21:36:56.275087 1060 log.go:172] (0xc000a069a0) Data frame received for 1\nI0330 21:36:56.275215 1060 log.go:172] (0xc000a9a000) (1) Data frame handling\nI0330 21:36:56.275246 1060 log.go:172] (0xc000a9a000) (1) Data frame sent\nI0330 21:36:56.275262 1060 log.go:172] (0xc000a069a0) (0xc000a9a000) Stream removed, broadcasting: 1\nI0330 21:36:56.275289 1060 log.go:172] (0xc000a069a0) Go away received\nI0330 21:36:56.275681 1060 log.go:172] (0xc000a069a0) (0xc000a9a000) Stream removed, broadcasting: 1\nI0330 21:36:56.275703 1060 log.go:172] (0xc000a069a0) (0xc0006c8000) Stream removed, broadcasting: 3\nI0330 21:36:56.275716 1060 log.go:172] (0xc000a069a0) (0xc0006c80a0) Stream removed, broadcasting: 5\n" Mar 30 21:36:56.278: INFO: stdout: "" Mar 30 21:36:56.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7663 execpodtd9t8 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 31222' Mar 30 21:36:56.493: INFO: stderr: "I0330 21:36:56.411597 1082 
log.go:172] (0xc0005866e0) (0xc000928000) Create stream\nI0330 21:36:56.411655 1082 log.go:172] (0xc0005866e0) (0xc000928000) Stream added, broadcasting: 1\nI0330 21:36:56.414502 1082 log.go:172] (0xc0005866e0) Reply frame received for 1\nI0330 21:36:56.414540 1082 log.go:172] (0xc0005866e0) (0xc0006a1ae0) Create stream\nI0330 21:36:56.414549 1082 log.go:172] (0xc0005866e0) (0xc0006a1ae0) Stream added, broadcasting: 3\nI0330 21:36:56.415749 1082 log.go:172] (0xc0005866e0) Reply frame received for 3\nI0330 21:36:56.415799 1082 log.go:172] (0xc0005866e0) (0xc00094e000) Create stream\nI0330 21:36:56.415816 1082 log.go:172] (0xc0005866e0) (0xc00094e000) Stream added, broadcasting: 5\nI0330 21:36:56.416796 1082 log.go:172] (0xc0005866e0) Reply frame received for 5\nI0330 21:36:56.488818 1082 log.go:172] (0xc0005866e0) Data frame received for 3\nI0330 21:36:56.488858 1082 log.go:172] (0xc0006a1ae0) (3) Data frame handling\nI0330 21:36:56.488913 1082 log.go:172] (0xc0005866e0) Data frame received for 5\nI0330 21:36:56.488955 1082 log.go:172] (0xc00094e000) (5) Data frame handling\nI0330 21:36:56.489000 1082 log.go:172] (0xc00094e000) (5) Data frame sent\nI0330 21:36:56.489024 1082 log.go:172] (0xc0005866e0) Data frame received for 5\nI0330 21:36:56.489042 1082 log.go:172] (0xc00094e000) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.10 31222\nConnection to 172.17.0.10 31222 port [tcp/31222] succeeded!\nI0330 21:36:56.490403 1082 log.go:172] (0xc0005866e0) Data frame received for 1\nI0330 21:36:56.490436 1082 log.go:172] (0xc000928000) (1) Data frame handling\nI0330 21:36:56.490471 1082 log.go:172] (0xc000928000) (1) Data frame sent\nI0330 21:36:56.490499 1082 log.go:172] (0xc0005866e0) (0xc000928000) Stream removed, broadcasting: 1\nI0330 21:36:56.490532 1082 log.go:172] (0xc0005866e0) Go away received\nI0330 21:36:56.490763 1082 log.go:172] (0xc0005866e0) (0xc000928000) Stream removed, broadcasting: 1\nI0330 21:36:56.490776 1082 log.go:172] (0xc0005866e0) (0xc0006a1ae0) Stream removed, broadcasting: 3\nI0330 21:36:56.490781 1082 log.go:172] (0xc0005866e0) (0xc00094e000) Stream removed, broadcasting: 5\n" Mar 30 21:36:56.493: INFO: stdout: "" Mar 30 21:36:56.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7663 execpodtd9t8 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 31222' Mar 30 21:36:56.670: INFO: stderr: "I0330 21:36:56.613584 1105 log.go:172] (0xc000102f20) (0xc0003ac1e0) Create stream\nI0330 21:36:56.613629 1105 log.go:172] (0xc000102f20) (0xc0003ac1e0) Stream added, broadcasting: 1\nI0330 21:36:56.615531 1105 log.go:172] (0xc000102f20) Reply frame received for 1\nI0330 21:36:56.615577 1105 log.go:172] (0xc000102f20) (0xc0003ac280) Create stream\nI0330 21:36:56.615594 1105 log.go:172] (0xc000102f20) (0xc0003ac280) Stream added, broadcasting: 3\nI0330 21:36:56.616305 1105 log.go:172] (0xc000102f20) Reply frame received for 3\nI0330 21:36:56.616347 1105 log.go:172] (0xc000102f20) (0xc000590be0) Create stream\nI0330 21:36:56.616360 1105 log.go:172] (0xc000102f20) (0xc000590be0) Stream added, broadcasting: 5\nI0330 21:36:56.617030 1105 log.go:172] (0xc000102f20) Reply frame received for 5\nI0330 21:36:56.664340 1105 log.go:172] (0xc000102f20) Data frame received for 3\nI0330 21:36:56.664362 1105 log.go:172] (0xc0003ac280) (3) Data frame handling\nI0330 21:36:56.664400 1105 log.go:172] (0xc000102f20) Data frame received for 5\nI0330 21:36:56.664437 1105 log.go:172] (0xc000590be0) (5) Data frame handling\nI0330 21:36:56.664475 1105 
log.go:172] (0xc000590be0) (5) Data frame sent\nI0330 21:36:56.664498 1105 log.go:172] (0xc000102f20) Data frame received for 5\nI0330 21:36:56.664519 1105 log.go:172] (0xc000590be0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.8 31222\nConnection to 172.17.0.8 31222 port [tcp/31222] succeeded!\nI0330 21:36:56.666273 1105 log.go:172] (0xc000102f20) Data frame received for 1\nI0330 21:36:56.666302 1105 log.go:172] (0xc0003ac1e0) (1) Data frame handling\nI0330 21:36:56.666314 1105 log.go:172] (0xc0003ac1e0) (1) Data frame sent\nI0330 21:36:56.666330 1105 log.go:172] (0xc000102f20) (0xc0003ac1e0) Stream removed, broadcasting: 1\nI0330 21:36:56.666353 1105 log.go:172] (0xc000102f20) Go away received\nI0330 21:36:56.666721 1105 log.go:172] (0xc000102f20) (0xc0003ac1e0) Stream removed, broadcasting: 1\nI0330 21:36:56.666744 1105 log.go:172] (0xc000102f20) (0xc0003ac280) Stream removed, broadcasting: 3\nI0330 21:36:56.666756 1105 log.go:172] (0xc000102f20) (0xc000590be0) Stream removed, broadcasting: 5\n" Mar 30 21:36:56.670: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:36:56.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7663" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.182 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":108,"skipped":1922,"failed":0} SS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:36:56.679: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 30 21:37:04.827: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 30 21:37:04.944: INFO: Pod pod-with-poststart-http-hook still exists Mar 30 21:37:06.944: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 30 21:37:06.949: INFO: Pod pod-with-poststart-http-hook still exists Mar 30 21:37:08.944: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 30 21:37:08.949: INFO: Pod pod-with-poststart-http-hook still exists Mar 30 21:37:10.944: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 30 21:37:10.949: INFO: Pod pod-with-poststart-http-hook still exists Mar 30 21:37:12.944: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 30 21:37:12.948: INFO: Pod pod-with-poststart-http-hook still exists Mar 30 21:37:14.944: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 30 21:37:14.949: INFO: Pod pod-with-poststart-http-hook still exists Mar 30 21:37:16.944: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 30 21:37:16.948: INFO: Pod pod-with-poststart-http-hook still exists Mar 30 21:37:18.944: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 30 21:37:18.959: INFO: Pod pod-with-poststart-http-hook still exists Mar 30 21:37:20.944: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Mar 30 21:37:20.965: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:37:20.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9216" for this suite. 
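The pod-with-poststart-http-hook above carries a postStart HTTP handler that fires against the helper pod created in the BeforeEach. A sketch of such a container spec, assuming the v1.17-era corev1.Handler type (later releases rename it LifecycleHandler); image, path, host, and port are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	c := corev1.Container{
		Name:  "pod-with-poststart-http-hook",
		Image: "k8s.gcr.io/pause:3.1", // illustrative; any long-running image works
		Lifecycle: &corev1.Lifecycle{
			PostStart: &corev1.Handler{ // LifecycleHandler in newer API versions
				HTTPGet: &corev1.HTTPGetAction{
					Path: "/echo?msg=poststart", // illustrative handler path
					Host: "10.244.1.10",         // illustrative: the handler pod's IP
					Port: intstr.FromInt(8080),  // illustrative handler port
				},
			},
		},
	}
	fmt.Printf("%+v\n", c)
}

The kubelet runs the hook right after the container starts; the test then verifies the handler pod observed the request before deleting the pod, which is the "Waiting for pod ... to disappear" loop above.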
• [SLOW TEST:24.295 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":109,"skipped":1924,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:37:20.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 30 21:37:21.563: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 30 21:37:23.573: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721201041, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721201041, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721201041, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721201041, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 30 21:37:26.603: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:37:26.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "webhook-5190" for this suite. STEP: Destroying namespace "webhook-5190-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.910 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":110,"skipped":1938,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:37:26.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 30 21:37:27.636: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 30 21:37:29.647: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721201047, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721201047, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721201047, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721201047, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 30 21:37:32.678: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 30 21:37:32.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom 
resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:37:33.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9935" for this suite. STEP: Destroying namespace "webhook-9935-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.143 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":111,"skipped":1960,"failed":0} [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:37:34.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Mar 30 21:37:40.674: INFO: Successfully updated pod "adopt-release-h9pfx" STEP: Checking that the Job readopts the Pod Mar 30 21:37:40.674: INFO: Waiting up to 15m0s for pod "adopt-release-h9pfx" in namespace "job-9416" to be "adopted" Mar 30 21:37:40.678: INFO: Pod "adopt-release-h9pfx": Phase="Running", Reason="", readiness=true. Elapsed: 3.261047ms Mar 30 21:37:42.682: INFO: Pod "adopt-release-h9pfx": Phase="Running", Reason="", readiness=true. Elapsed: 2.007680719s Mar 30 21:37:42.682: INFO: Pod "adopt-release-h9pfx" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Mar 30 21:37:43.191: INFO: Successfully updated pod "adopt-release-h9pfx" STEP: Checking that the Job releases the Pod Mar 30 21:37:43.191: INFO: Waiting up to 15m0s for pod "adopt-release-h9pfx" in namespace "job-9416" to be "released" Mar 30 21:37:43.205: INFO: Pod "adopt-release-h9pfx": Phase="Running", Reason="", readiness=true. 
Elapsed: 13.2529ms Mar 30 21:37:43.205: INFO: Pod "adopt-release-h9pfx" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:37:43.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9416" for this suite. • [SLOW TEST:9.260 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":112,"skipped":1960,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:37:43.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Mar 30 21:37:43.407: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:37:57.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1042" for this suite. 
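For the version-rename check that just finished: the published OpenAPI spec tracks the served versions of a CRD, so the test flips the name on one CustomResourceDefinitionVersion entry and re-reads the spec. A sketch of such a versions list, with illustrative version names:

package main

import (
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

func main() {
	versions := []apiextensionsv1.CustomResourceDefinitionVersion{
		{Name: "v2", Served: true, Storage: true},
		{Name: "v4", Served: true, Storage: false}, // the renamed version
	}
	for _, v := range versions {
		fmt.Printf("version=%s served=%t storage=%t\n", v.Name, v.Served, v.Storage)
	}
}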
• [SLOW TEST:14.587 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":113,"skipped":1977,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:37:57.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Mar 30 21:38:02.474: INFO: Successfully updated pod "annotationupdate2387d338-377c-49c9-9ff4-1fa7c2bd28c9" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:38:04.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6207" for this suite. 
• [SLOW TEST:6.650 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":114,"skipped":1986,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:38:04.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 30 21:38:04.628: INFO: Waiting up to 5m0s for pod "pod-b9f900a6-2f5b-41b0-bc4a-9acb0a2de0ac" in namespace "emptydir-423" to be "success or failure" Mar 30 21:38:04.630: INFO: Pod "pod-b9f900a6-2f5b-41b0-bc4a-9acb0a2de0ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.186497ms Mar 30 21:38:06.635: INFO: Pod "pod-b9f900a6-2f5b-41b0-bc4a-9acb0a2de0ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006745367s Mar 30 21:38:08.639: INFO: Pod "pod-b9f900a6-2f5b-41b0-bc4a-9acb0a2de0ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011060564s STEP: Saw pod success Mar 30 21:38:08.639: INFO: Pod "pod-b9f900a6-2f5b-41b0-bc4a-9acb0a2de0ac" satisfied condition "success or failure" Mar 30 21:38:08.642: INFO: Trying to get logs from node jerma-worker pod pod-b9f900a6-2f5b-41b0-bc4a-9acb0a2de0ac container test-container: STEP: delete the pod Mar 30 21:38:08.684: INFO: Waiting for pod pod-b9f900a6-2f5b-41b0-bc4a-9acb0a2de0ac to disappear Mar 30 21:38:08.696: INFO: Pod pod-b9f900a6-2f5b-41b0-bc4a-9acb0a2de0ac no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:38:08.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-423" for this suite. 
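The (root,0644,default) case above writes a file into the emptyDir mount with mode 0644 and asserts on the mode the container observes. A minimal local analogue of that check; path and content are illustrative, and the effective mode of a fresh file is still subject to the process umask:

package main

import (
	"fmt"
	"os"
)

func main() {
	const path = "/tmp/mount-tmp" // illustrative stand-in for the emptyDir file
	if err := os.WriteFile(path, []byte("mount-tmp content"), 0644); err != nil {
		panic(err)
	}
	info, err := os.Stat(path)
	if err != nil {
		panic(err)
	}
	// The requested 0644 can be narrowed by the umask at creation time.
	fmt.Printf("mode=%v perm=%04o\n", info.Mode(), info.Mode().Perm())
}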
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":115,"skipped":1987,"failed":0} SSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:38:08.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Mar 30 21:38:08.816: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 21:38:08.838: INFO: Number of nodes with available pods: 0 Mar 30 21:38:08.838: INFO: Node jerma-worker is running more than one daemon pod Mar 30 21:38:09.843: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 21:38:09.991: INFO: Number of nodes with available pods: 0 Mar 30 21:38:09.991: INFO: Node jerma-worker is running more than one daemon pod Mar 30 21:38:11.093: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 21:38:11.096: INFO: Number of nodes with available pods: 0 Mar 30 21:38:11.096: INFO: Node jerma-worker is running more than one daemon pod Mar 30 21:38:11.912: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 21:38:11.916: INFO: Number of nodes with available pods: 0 Mar 30 21:38:11.916: INFO: Node jerma-worker is running more than one daemon pod Mar 30 21:38:12.843: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 21:38:12.847: INFO: Number of nodes with available pods: 1 Mar 30 21:38:12.847: INFO: Node jerma-worker2 is running more than one daemon pod Mar 30 21:38:13.842: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 21:38:13.846: INFO: Number of nodes with available pods: 2 Mar 30 21:38:13.846: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Mar 30 21:38:13.906: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 21:38:13.918: INFO: Number of nodes with available pods: 1 Mar 30 21:38:13.918: INFO: Node jerma-worker is running more than one daemon pod Mar 30 21:38:14.921: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 21:38:14.923: INFO: Number of nodes with available pods: 1 Mar 30 21:38:14.923: INFO: Node jerma-worker is running more than one daemon pod Mar 30 21:38:15.942: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 21:38:15.945: INFO: Number of nodes with available pods: 1 Mar 30 21:38:15.945: INFO: Node jerma-worker is running more than one daemon pod Mar 30 21:38:16.923: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 21:38:16.927: INFO: Number of nodes with available pods: 2 Mar 30 21:38:16.927: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4860, will wait for the garbage collector to delete the pods Mar 30 21:38:16.992: INFO: Deleting DaemonSet.extensions daemon-set took: 6.267191ms Mar 30 21:38:17.292: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.26377ms Mar 30 21:38:21.605: INFO: Number of nodes with available pods: 0 Mar 30 21:38:21.605: INFO: Number of running nodes: 0, number of available pods: 0 Mar 30 21:38:21.608: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4860/daemonsets","resourceVersion":"4058794"},"items":null} Mar 30 21:38:21.609: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4860/pods","resourceVersion":"4058794"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:38:21.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4860" for this suite. 
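The run above repeatedly skips jerma-control-plane because the DaemonSet's pods carry no toleration for the master taint. A toleration like the following (illustrative; the conformance DaemonSet deliberately omits it) is what would let a DaemonSet schedule onto that node:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	t := corev1.Toleration{
		Key:      "node-role.kubernetes.io/master", // the taint logged above
		Operator: corev1.TolerationOpExists,
		Effect:   corev1.TaintEffectNoSchedule,
	}
	fmt.Printf("%+v\n", t)
}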
• [SLOW TEST:12.951 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":116,"skipped":1991,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:38:21.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Mar 30 21:38:22.007: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Mar 30 21:38:24.038: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721201102, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721201102, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721201102, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721201101, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 30 21:38:27.073: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 30 21:38:27.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:38:28.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-6352" 
for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:6.601 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":117,"skipped":2002,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:38:28.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-48182662-a134-416e-8ea7-91996a0380b6 STEP: Creating a pod to test consume configMaps Mar 30 21:38:28.561: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5ef8e159-84aa-445f-8dd9-327f4914a61b" in namespace "projected-567" to be "success or failure" Mar 30 21:38:28.770: INFO: Pod "pod-projected-configmaps-5ef8e159-84aa-445f-8dd9-327f4914a61b": Phase="Pending", Reason="", readiness=false. Elapsed: 208.200523ms Mar 30 21:38:30.781: INFO: Pod "pod-projected-configmaps-5ef8e159-84aa-445f-8dd9-327f4914a61b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219197164s Mar 30 21:38:32.785: INFO: Pod "pod-projected-configmaps-5ef8e159-84aa-445f-8dd9-327f4914a61b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.223678192s STEP: Saw pod success Mar 30 21:38:32.785: INFO: Pod "pod-projected-configmaps-5ef8e159-84aa-445f-8dd9-327f4914a61b" satisfied condition "success or failure" Mar 30 21:38:32.788: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-5ef8e159-84aa-445f-8dd9-327f4914a61b container projected-configmap-volume-test: STEP: delete the pod Mar 30 21:38:32.828: INFO: Waiting for pod pod-projected-configmaps-5ef8e159-84aa-445f-8dd9-327f4914a61b to disappear Mar 30 21:38:32.831: INFO: Pod pod-projected-configmaps-5ef8e159-84aa-445f-8dd9-327f4914a61b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:38:32.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-567" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":118,"skipped":2023,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:38:32.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 30 21:38:32.896: INFO: Waiting up to 5m0s for pod "downward-api-affa470a-38ff-4588-8bec-8f8b9bfa4d00" in namespace "downward-api-8233" to be "success or failure" Mar 30 21:38:32.919: INFO: Pod "downward-api-affa470a-38ff-4588-8bec-8f8b9bfa4d00": Phase="Pending", Reason="", readiness=false. Elapsed: 22.189871ms Mar 30 21:38:34.923: INFO: Pod "downward-api-affa470a-38ff-4588-8bec-8f8b9bfa4d00": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026113635s Mar 30 21:38:36.927: INFO: Pod "downward-api-affa470a-38ff-4588-8bec-8f8b9bfa4d00": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030265451s STEP: Saw pod success Mar 30 21:38:36.927: INFO: Pod "downward-api-affa470a-38ff-4588-8bec-8f8b9bfa4d00" satisfied condition "success or failure" Mar 30 21:38:36.930: INFO: Trying to get logs from node jerma-worker pod downward-api-affa470a-38ff-4588-8bec-8f8b9bfa4d00 container dapi-container: STEP: delete the pod Mar 30 21:38:36.994: INFO: Waiting for pod downward-api-affa470a-38ff-4588-8bec-8f8b9bfa4d00 to disappear Mar 30 21:38:37.014: INFO: Pod downward-api-affa470a-38ff-4588-8bec-8f8b9bfa4d00 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:38:37.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8233" for this suite. 
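The dapi-container above receives the node address through the downward API: an env var whose ValueFrom points at the pod's status.hostIP field. A sketch of that env entry; the HOST_IP name follows this test family's convention but is illustrative here:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	env := corev1.EnvVar{
		Name: "HOST_IP", // illustrative name
		ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{
				APIVersion: "v1",
				FieldPath:  "status.hostIP",
			},
		},
	}
	fmt.Printf("%+v\n", env)
}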
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":119,"skipped":2045,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:38:37.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 30 21:38:37.161: INFO: Waiting up to 5m0s for pod "downwardapi-volume-44fee0a9-00a2-4c03-82ca-b404769df846" in namespace "projected-3582" to be "success or failure" Mar 30 21:38:37.170: INFO: Pod "downwardapi-volume-44fee0a9-00a2-4c03-82ca-b404769df846": Phase="Pending", Reason="", readiness=false. Elapsed: 9.424712ms Mar 30 21:38:39.175: INFO: Pod "downwardapi-volume-44fee0a9-00a2-4c03-82ca-b404769df846": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013963881s Mar 30 21:38:41.179: INFO: Pod "downwardapi-volume-44fee0a9-00a2-4c03-82ca-b404769df846": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018295182s STEP: Saw pod success Mar 30 21:38:41.179: INFO: Pod "downwardapi-volume-44fee0a9-00a2-4c03-82ca-b404769df846" satisfied condition "success or failure" Mar 30 21:38:41.183: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-44fee0a9-00a2-4c03-82ca-b404769df846 container client-container: STEP: delete the pod Mar 30 21:38:41.243: INFO: Waiting for pod downwardapi-volume-44fee0a9-00a2-4c03-82ca-b404769df846 to disappear Mar 30 21:38:41.248: INFO: Pod downwardapi-volume-44fee0a9-00a2-4c03-82ca-b404769df846 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:38:41.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3582" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":120,"skipped":2057,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:38:41.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-downwardapi-mkdn STEP: Creating a pod to test atomic-volume-subpath Mar 30 21:38:41.378: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-mkdn" in namespace "subpath-2478" to be "success or failure" Mar 30 21:38:41.386: INFO: Pod "pod-subpath-test-downwardapi-mkdn": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067678ms Mar 30 21:38:43.397: INFO: Pod "pod-subpath-test-downwardapi-mkdn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019215789s Mar 30 21:38:45.401: INFO: Pod "pod-subpath-test-downwardapi-mkdn": Phase="Running", Reason="", readiness=true. Elapsed: 4.023255624s Mar 30 21:38:47.406: INFO: Pod "pod-subpath-test-downwardapi-mkdn": Phase="Running", Reason="", readiness=true. Elapsed: 6.027608646s Mar 30 21:38:49.410: INFO: Pod "pod-subpath-test-downwardapi-mkdn": Phase="Running", Reason="", readiness=true. Elapsed: 8.031612221s Mar 30 21:38:51.414: INFO: Pod "pod-subpath-test-downwardapi-mkdn": Phase="Running", Reason="", readiness=true. Elapsed: 10.035674221s Mar 30 21:38:53.418: INFO: Pod "pod-subpath-test-downwardapi-mkdn": Phase="Running", Reason="", readiness=true. Elapsed: 12.039770583s Mar 30 21:38:55.423: INFO: Pod "pod-subpath-test-downwardapi-mkdn": Phase="Running", Reason="", readiness=true. Elapsed: 14.044430715s Mar 30 21:38:57.427: INFO: Pod "pod-subpath-test-downwardapi-mkdn": Phase="Running", Reason="", readiness=true. Elapsed: 16.048701035s Mar 30 21:38:59.432: INFO: Pod "pod-subpath-test-downwardapi-mkdn": Phase="Running", Reason="", readiness=true. Elapsed: 18.053852323s Mar 30 21:39:01.436: INFO: Pod "pod-subpath-test-downwardapi-mkdn": Phase="Running", Reason="", readiness=true. Elapsed: 20.058213379s Mar 30 21:39:03.441: INFO: Pod "pod-subpath-test-downwardapi-mkdn": Phase="Running", Reason="", readiness=true. Elapsed: 22.062839351s Mar 30 21:39:05.445: INFO: Pod "pod-subpath-test-downwardapi-mkdn": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.067228081s STEP: Saw pod success Mar 30 21:39:05.445: INFO: Pod "pod-subpath-test-downwardapi-mkdn" satisfied condition "success or failure" Mar 30 21:39:05.448: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-downwardapi-mkdn container test-container-subpath-downwardapi-mkdn: STEP: delete the pod Mar 30 21:39:05.480: INFO: Waiting for pod pod-subpath-test-downwardapi-mkdn to disappear Mar 30 21:39:05.517: INFO: Pod pod-subpath-test-downwardapi-mkdn no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-mkdn Mar 30 21:39:05.517: INFO: Deleting pod "pod-subpath-test-downwardapi-mkdn" in namespace "subpath-2478" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:39:05.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2478" for this suite. • [SLOW TEST:24.272 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":121,"skipped":2088,"failed":0} SS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:39:05.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Mar 30 21:39:09.601: INFO: &Pod{ObjectMeta:{send-events-1e4e97c5-64a3-4f7e-ace0-a114c9466520 events-6305 /api/v1/namespaces/events-6305/pods/send-events-1e4e97c5-64a3-4f7e-ace0-a114c9466520 697f52f4-0826-467a-a137-dd275ae135cd 4059136 0 2020-03-30 21:39:05 +0000 UTC map[name:foo time:576632041] map[] [] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-htmd4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-htmd4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-htmd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:39:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:39:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:39:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:39:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.211,StartTime:2020-03-30 21:39:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-30 21:39:07 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://adcd2716233f518f4fd9b701b98f66a348f2c159ee92deaa95d8cb19d23cec6a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.211,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Mar 30 21:39:11.606: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Mar 30 21:39:13.610: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:39:13.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-6305" for this suite. • [SLOW TEST:8.099 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":122,"skipped":2090,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:39:13.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 30 21:39:13.676: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3154fa64-6cb0-4c68-9f56-9e2b4d5dbdab" in namespace "projected-647" to be "success or failure" Mar 30 21:39:13.688: INFO: Pod "downwardapi-volume-3154fa64-6cb0-4c68-9f56-9e2b4d5dbdab": Phase="Pending", Reason="", readiness=false. Elapsed: 11.447259ms Mar 30 21:39:15.693: INFO: Pod "downwardapi-volume-3154fa64-6cb0-4c68-9f56-9e2b4d5dbdab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016769197s Mar 30 21:39:17.697: INFO: Pod "downwardapi-volume-3154fa64-6cb0-4c68-9f56-9e2b4d5dbdab": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.021068421s STEP: Saw pod success Mar 30 21:39:17.698: INFO: Pod "downwardapi-volume-3154fa64-6cb0-4c68-9f56-9e2b4d5dbdab" satisfied condition "success or failure" Mar 30 21:39:17.701: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-3154fa64-6cb0-4c68-9f56-9e2b4d5dbdab container client-container: STEP: delete the pod Mar 30 21:39:17.758: INFO: Waiting for pod downwardapi-volume-3154fa64-6cb0-4c68-9f56-9e2b4d5dbdab to disappear Mar 30 21:39:17.764: INFO: Pod downwardapi-volume-3154fa64-6cb0-4c68-9f56-9e2b4d5dbdab no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:39:17.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-647" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":123,"skipped":2102,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:39:17.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Mar 30 21:39:22.403: INFO: Successfully updated pod "labelsupdatedf088658-c242-4d1a-b5b3-22102122e1de" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:39:24.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5655" for this suite. • [SLOW TEST:6.671 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":124,"skipped":2114,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:39:24.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:39:35.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6345" for this suite. • [SLOW TEST:11.158 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":125,"skipped":2151,"failed":0} [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:39:35.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-3eef2d0a-1393-4548-b30a-3d492eef2084 STEP: Creating a pod to test consume secrets Mar 30 21:39:35.726: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-da0c45f1-3cbe-42e6-877f-d2bc4606a970" in namespace "projected-3749" to be "success or failure" Mar 30 21:39:35.728: INFO: Pod "pod-projected-secrets-da0c45f1-3cbe-42e6-877f-d2bc4606a970": Phase="Pending", Reason="", readiness=false. Elapsed: 2.467443ms Mar 30 21:39:37.732: INFO: Pod "pod-projected-secrets-da0c45f1-3cbe-42e6-877f-d2bc4606a970": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006475428s Mar 30 21:39:39.736: INFO: Pod "pod-projected-secrets-da0c45f1-3cbe-42e6-877f-d2bc4606a970": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010252732s STEP: Saw pod success Mar 30 21:39:39.736: INFO: Pod "pod-projected-secrets-da0c45f1-3cbe-42e6-877f-d2bc4606a970" satisfied condition "success or failure" Mar 30 21:39:39.739: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-da0c45f1-3cbe-42e6-877f-d2bc4606a970 container projected-secret-volume-test: STEP: delete the pod Mar 30 21:39:39.766: INFO: Waiting for pod pod-projected-secrets-da0c45f1-3cbe-42e6-877f-d2bc4606a970 to disappear Mar 30 21:39:39.783: INFO: Pod pod-projected-secrets-da0c45f1-3cbe-42e6-877f-d2bc4606a970 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:39:39.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3749" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":126,"skipped":2151,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:39:39.791: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 30 21:39:39.827: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:39:39.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5938" for this suite. 
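The CRD test whose record ends here checks that the status sub-resource is read and written independently of the spec. A sketch of that round-trip via the apiextensions clientset follows; the CRD name and condition type are made up, and the context-taking Get/UpdateStatus signatures assume apiextensions-apiserver/client-go v0.18 or later.

package main

import (
	"context"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := apiextclient.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	crds := cs.ApiextensionsV1().CustomResourceDefinitions()
	crd, err := crds.Get(context.TODO(), "foos.example.com", metav1.GetOptions{}) // hypothetical CRD
	if err != nil {
		panic(err)
	}

	// Touch only status; a plain Update would ignore these fields, which is
	// exactly the spec/status separation the test verifies.
	crd.Status.Conditions = append(crd.Status.Conditions, apiextv1.CustomResourceDefinitionCondition{
		Type:    "EditedByDemo", // hypothetical condition type
		Status:  apiextv1.ConditionTrue,
		Reason:  "E2E",
		Message: "status sub-resource update",
	})
	if _, err := crds.UpdateStatus(context.TODO(), crd, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}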
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":127,"skipped":2154,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:39:40.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-5838 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating stateful set ss in namespace statefulset-5838 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5838 Mar 30 21:39:40.142: INFO: Found 0 stateful pods, waiting for 1 Mar 30 21:39:50.147: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Mar 30 21:39:50.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5838 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 30 21:39:50.433: INFO: stderr: "I0330 21:39:50.296970 1127 log.go:172] (0xc00010c2c0) (0xc00064c640) Create stream\nI0330 21:39:50.297029 1127 log.go:172] (0xc00010c2c0) (0xc00064c640) Stream added, broadcasting: 1\nI0330 21:39:50.299879 1127 log.go:172] (0xc00010c2c0) Reply frame received for 1\nI0330 21:39:50.299922 1127 log.go:172] (0xc00010c2c0) (0xc000551400) Create stream\nI0330 21:39:50.299934 1127 log.go:172] (0xc00010c2c0) (0xc000551400) Stream added, broadcasting: 3\nI0330 21:39:50.301289 1127 log.go:172] (0xc00010c2c0) Reply frame received for 3\nI0330 21:39:50.301330 1127 log.go:172] (0xc00010c2c0) (0xc000a30000) Create stream\nI0330 21:39:50.301345 1127 log.go:172] (0xc00010c2c0) (0xc000a30000) Stream added, broadcasting: 5\nI0330 21:39:50.302199 1127 log.go:172] (0xc00010c2c0) Reply frame received for 5\nI0330 21:39:50.388754 1127 log.go:172] (0xc00010c2c0) Data frame received for 5\nI0330 21:39:50.388776 1127 log.go:172] (0xc000a30000) (5) Data frame handling\nI0330 21:39:50.388787 1127 log.go:172] (0xc000a30000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0330 21:39:50.427536 1127 log.go:172] (0xc00010c2c0) Data frame received for 3\nI0330 21:39:50.427573 1127 log.go:172] (0xc000551400) (3) Data frame handling\nI0330 21:39:50.427587 1127 log.go:172] (0xc000551400) (3) Data frame sent\nI0330 21:39:50.427609 1127 log.go:172] (0xc00010c2c0) Data frame 
received for 3\nI0330 21:39:50.427633 1127 log.go:172] (0xc000551400) (3) Data frame handling\nI0330 21:39:50.427668 1127 log.go:172] (0xc00010c2c0) Data frame received for 5\nI0330 21:39:50.427699 1127 log.go:172] (0xc000a30000) (5) Data frame handling\nI0330 21:39:50.429389 1127 log.go:172] (0xc00010c2c0) Data frame received for 1\nI0330 21:39:50.429424 1127 log.go:172] (0xc00064c640) (1) Data frame handling\nI0330 21:39:50.429448 1127 log.go:172] (0xc00064c640) (1) Data frame sent\nI0330 21:39:50.429480 1127 log.go:172] (0xc00010c2c0) (0xc00064c640) Stream removed, broadcasting: 1\nI0330 21:39:50.429519 1127 log.go:172] (0xc00010c2c0) Go away received\nI0330 21:39:50.429921 1127 log.go:172] (0xc00010c2c0) (0xc00064c640) Stream removed, broadcasting: 1\nI0330 21:39:50.429957 1127 log.go:172] (0xc00010c2c0) (0xc000551400) Stream removed, broadcasting: 3\nI0330 21:39:50.429983 1127 log.go:172] (0xc00010c2c0) (0xc000a30000) Stream removed, broadcasting: 5\n" Mar 30 21:39:50.433: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 30 21:39:50.433: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 30 21:39:50.437: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Mar 30 21:40:00.442: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 30 21:40:00.442: INFO: Waiting for statefulset status.replicas updated to 0 Mar 30 21:40:00.454: INFO: POD NODE PHASE GRACE CONDITIONS Mar 30 21:40:00.454: INFO: ss-0 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:39:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:39:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:39:50 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:39:40 +0000 UTC }] Mar 30 21:40:00.454: INFO: Mar 30 21:40:00.454: INFO: StatefulSet ss has not reached scale 3, at 1 Mar 30 21:40:01.459: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996316131s Mar 30 21:40:02.587: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.992166029s Mar 30 21:40:03.591: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.863601472s Mar 30 21:40:04.596: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.859227512s Mar 30 21:40:05.620: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.854884743s Mar 30 21:40:06.645: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.830278539s Mar 30 21:40:07.653: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.80587642s Mar 30 21:40:08.657: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.797764967s Mar 30 21:40:09.663: INFO: Verifying statefulset ss doesn't scale past 3 for another 793.191336ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5838 Mar 30 21:40:10.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5838 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 30 21:40:10.887: INFO: stderr: "I0330 21:40:10.798659 1150 log.go:172] (0xc00083a840) (0xc0007b40a0) Create stream\nI0330 21:40:10.798715 1150 
log.go:172] (0xc00083a840) (0xc0007b40a0) Stream added, broadcasting: 1\nI0330 21:40:10.808014 1150 log.go:172] (0xc00083a840) Reply frame received for 1\nI0330 21:40:10.808080 1150 log.go:172] (0xc00083a840) (0xc0006bba40) Create stream\nI0330 21:40:10.808098 1150 log.go:172] (0xc00083a840) (0xc0006bba40) Stream added, broadcasting: 3\nI0330 21:40:10.810277 1150 log.go:172] (0xc00083a840) Reply frame received for 3\nI0330 21:40:10.810302 1150 log.go:172] (0xc00083a840) (0xc0007b4140) Create stream\nI0330 21:40:10.810311 1150 log.go:172] (0xc00083a840) (0xc0007b4140) Stream added, broadcasting: 5\nI0330 21:40:10.811225 1150 log.go:172] (0xc00083a840) Reply frame received for 5\nI0330 21:40:10.880903 1150 log.go:172] (0xc00083a840) Data frame received for 3\nI0330 21:40:10.880948 1150 log.go:172] (0xc0006bba40) (3) Data frame handling\nI0330 21:40:10.880968 1150 log.go:172] (0xc0006bba40) (3) Data frame sent\nI0330 21:40:10.880985 1150 log.go:172] (0xc00083a840) Data frame received for 3\nI0330 21:40:10.880999 1150 log.go:172] (0xc0006bba40) (3) Data frame handling\nI0330 21:40:10.881071 1150 log.go:172] (0xc00083a840) Data frame received for 5\nI0330 21:40:10.881308 1150 log.go:172] (0xc0007b4140) (5) Data frame handling\nI0330 21:40:10.881350 1150 log.go:172] (0xc0007b4140) (5) Data frame sent\nI0330 21:40:10.881385 1150 log.go:172] (0xc00083a840) Data frame received for 5\nI0330 21:40:10.881409 1150 log.go:172] (0xc0007b4140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0330 21:40:10.883071 1150 log.go:172] (0xc00083a840) Data frame received for 1\nI0330 21:40:10.883105 1150 log.go:172] (0xc0007b40a0) (1) Data frame handling\nI0330 21:40:10.883142 1150 log.go:172] (0xc0007b40a0) (1) Data frame sent\nI0330 21:40:10.883171 1150 log.go:172] (0xc00083a840) (0xc0007b40a0) Stream removed, broadcasting: 1\nI0330 21:40:10.883208 1150 log.go:172] (0xc00083a840) Go away received\nI0330 21:40:10.883632 1150 log.go:172] (0xc00083a840) (0xc0007b40a0) Stream removed, broadcasting: 1\nI0330 21:40:10.883662 1150 log.go:172] (0xc00083a840) (0xc0006bba40) Stream removed, broadcasting: 3\nI0330 21:40:10.883677 1150 log.go:172] (0xc00083a840) (0xc0007b4140) Stream removed, broadcasting: 5\n" Mar 30 21:40:10.887: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 30 21:40:10.887: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 30 21:40:10.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5838 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 30 21:40:11.108: INFO: stderr: "I0330 21:40:11.027131 1173 log.go:172] (0xc0001056b0) (0xc0009d6000) Create stream\nI0330 21:40:11.027194 1173 log.go:172] (0xc0001056b0) (0xc0009d6000) Stream added, broadcasting: 1\nI0330 21:40:11.030439 1173 log.go:172] (0xc0001056b0) Reply frame received for 1\nI0330 21:40:11.030483 1173 log.go:172] (0xc0001056b0) (0xc0005edc20) Create stream\nI0330 21:40:11.030497 1173 log.go:172] (0xc0001056b0) (0xc0005edc20) Stream added, broadcasting: 3\nI0330 21:40:11.031610 1173 log.go:172] (0xc0001056b0) Reply frame received for 3\nI0330 21:40:11.031679 1173 log.go:172] (0xc0001056b0) (0xc00069a000) Create stream\nI0330 21:40:11.031713 1173 log.go:172] (0xc0001056b0) (0xc00069a000) Stream added, broadcasting: 5\nI0330 21:40:11.032737 1173 log.go:172] (0xc0001056b0) Reply frame received for 5\nI0330 
21:40:11.101571 1173 log.go:172] (0xc0001056b0) Data frame received for 5\nI0330 21:40:11.101604 1173 log.go:172] (0xc00069a000) (5) Data frame handling\nI0330 21:40:11.101616 1173 log.go:172] (0xc00069a000) (5) Data frame sent\nI0330 21:40:11.101626 1173 log.go:172] (0xc0001056b0) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0330 21:40:11.101666 1173 log.go:172] (0xc0001056b0) Data frame received for 3\nI0330 21:40:11.101709 1173 log.go:172] (0xc0005edc20) (3) Data frame handling\nI0330 21:40:11.101732 1173 log.go:172] (0xc0005edc20) (3) Data frame sent\nI0330 21:40:11.101753 1173 log.go:172] (0xc0001056b0) Data frame received for 3\nI0330 21:40:11.101783 1173 log.go:172] (0xc0005edc20) (3) Data frame handling\nI0330 21:40:11.101797 1173 log.go:172] (0xc00069a000) (5) Data frame handling\nI0330 21:40:11.103457 1173 log.go:172] (0xc0001056b0) Data frame received for 1\nI0330 21:40:11.103476 1173 log.go:172] (0xc0009d6000) (1) Data frame handling\nI0330 21:40:11.103485 1173 log.go:172] (0xc0009d6000) (1) Data frame sent\nI0330 21:40:11.103550 1173 log.go:172] (0xc0001056b0) (0xc0009d6000) Stream removed, broadcasting: 1\nI0330 21:40:11.103611 1173 log.go:172] (0xc0001056b0) Go away received\nI0330 21:40:11.104032 1173 log.go:172] (0xc0001056b0) (0xc0009d6000) Stream removed, broadcasting: 1\nI0330 21:40:11.104061 1173 log.go:172] (0xc0001056b0) (0xc0005edc20) Stream removed, broadcasting: 3\nI0330 21:40:11.104078 1173 log.go:172] (0xc0001056b0) (0xc00069a000) Stream removed, broadcasting: 5\n" Mar 30 21:40:11.108: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 30 21:40:11.108: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 30 21:40:11.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5838 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 30 21:40:11.329: INFO: stderr: "I0330 21:40:11.242648 1194 log.go:172] (0xc000a78000) (0xc000715400) Create stream\nI0330 21:40:11.242706 1194 log.go:172] (0xc000a78000) (0xc000715400) Stream added, broadcasting: 1\nI0330 21:40:11.245359 1194 log.go:172] (0xc000a78000) Reply frame received for 1\nI0330 21:40:11.245413 1194 log.go:172] (0xc000a78000) (0xc000a18000) Create stream\nI0330 21:40:11.245427 1194 log.go:172] (0xc000a78000) (0xc000a18000) Stream added, broadcasting: 3\nI0330 21:40:11.246465 1194 log.go:172] (0xc000a78000) Reply frame received for 3\nI0330 21:40:11.246506 1194 log.go:172] (0xc000a78000) (0xc0006119a0) Create stream\nI0330 21:40:11.246520 1194 log.go:172] (0xc000a78000) (0xc0006119a0) Stream added, broadcasting: 5\nI0330 21:40:11.247572 1194 log.go:172] (0xc000a78000) Reply frame received for 5\nI0330 21:40:11.322739 1194 log.go:172] (0xc000a78000) Data frame received for 3\nI0330 21:40:11.322796 1194 log.go:172] (0xc000a18000) (3) Data frame handling\nI0330 21:40:11.322815 1194 log.go:172] (0xc000a18000) (3) Data frame sent\nI0330 21:40:11.322830 1194 log.go:172] (0xc000a78000) Data frame received for 3\nI0330 21:40:11.322846 1194 log.go:172] (0xc000a18000) (3) Data frame handling\nI0330 21:40:11.322885 1194 log.go:172] (0xc000a78000) Data frame received for 5\nI0330 21:40:11.322918 1194 log.go:172] (0xc0006119a0) (5) Data frame handling\nI0330 21:40:11.322953 1194 log.go:172] (0xc0006119a0) (5) Data frame 
sent\nI0330 21:40:11.322979 1194 log.go:172] (0xc000a78000) Data frame received for 5\nI0330 21:40:11.323005 1194 log.go:172] (0xc0006119a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0330 21:40:11.324162 1194 log.go:172] (0xc000a78000) Data frame received for 1\nI0330 21:40:11.324193 1194 log.go:172] (0xc000715400) (1) Data frame handling\nI0330 21:40:11.324210 1194 log.go:172] (0xc000715400) (1) Data frame sent\nI0330 21:40:11.324228 1194 log.go:172] (0xc000a78000) (0xc000715400) Stream removed, broadcasting: 1\nI0330 21:40:11.324309 1194 log.go:172] (0xc000a78000) Go away received\nI0330 21:40:11.324624 1194 log.go:172] (0xc000a78000) (0xc000715400) Stream removed, broadcasting: 1\nI0330 21:40:11.324649 1194 log.go:172] (0xc000a78000) (0xc000a18000) Stream removed, broadcasting: 3\nI0330 21:40:11.324663 1194 log.go:172] (0xc000a78000) (0xc0006119a0) Stream removed, broadcasting: 5\n" Mar 30 21:40:11.329: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 30 21:40:11.329: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 30 21:40:11.333: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Mar 30 21:40:21.336: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Mar 30 21:40:21.336: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Mar 30 21:40:21.336: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Mar 30 21:40:21.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5838 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 30 21:40:21.580: INFO: stderr: "I0330 21:40:21.486743 1213 log.go:172] (0xc0009f40b0) (0xc0006e1a40) Create stream\nI0330 21:40:21.486797 1213 log.go:172] (0xc0009f40b0) (0xc0006e1a40) Stream added, broadcasting: 1\nI0330 21:40:21.489968 1213 log.go:172] (0xc0009f40b0) Reply frame received for 1\nI0330 21:40:21.490017 1213 log.go:172] (0xc0009f40b0) (0xc0009c0000) Create stream\nI0330 21:40:21.490032 1213 log.go:172] (0xc0009f40b0) (0xc0009c0000) Stream added, broadcasting: 3\nI0330 21:40:21.491086 1213 log.go:172] (0xc0009f40b0) Reply frame received for 3\nI0330 21:40:21.491125 1213 log.go:172] (0xc0009f40b0) (0xc0006e1c20) Create stream\nI0330 21:40:21.491136 1213 log.go:172] (0xc0009f40b0) (0xc0006e1c20) Stream added, broadcasting: 5\nI0330 21:40:21.492156 1213 log.go:172] (0xc0009f40b0) Reply frame received for 5\nI0330 21:40:21.573621 1213 log.go:172] (0xc0009f40b0) Data frame received for 3\nI0330 21:40:21.573674 1213 log.go:172] (0xc0009c0000) (3) Data frame handling\nI0330 21:40:21.573701 1213 log.go:172] (0xc0009c0000) (3) Data frame sent\nI0330 21:40:21.573719 1213 log.go:172] (0xc0009f40b0) Data frame received for 3\nI0330 21:40:21.573735 1213 log.go:172] (0xc0009c0000) (3) Data frame handling\nI0330 21:40:21.573758 1213 log.go:172] (0xc0009f40b0) Data frame received for 5\nI0330 21:40:21.573775 1213 log.go:172] (0xc0006e1c20) (5) Data frame handling\nI0330 21:40:21.573799 1213 log.go:172] (0xc0006e1c20) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0330 21:40:21.573820 1213 log.go:172] (0xc0009f40b0) Data frame 
received for 5\nI0330 21:40:21.573837 1213 log.go:172] (0xc0006e1c20) (5) Data frame handling\nI0330 21:40:21.575292 1213 log.go:172] (0xc0009f40b0) Data frame received for 1\nI0330 21:40:21.575317 1213 log.go:172] (0xc0006e1a40) (1) Data frame handling\nI0330 21:40:21.575331 1213 log.go:172] (0xc0006e1a40) (1) Data frame sent\nI0330 21:40:21.575346 1213 log.go:172] (0xc0009f40b0) (0xc0006e1a40) Stream removed, broadcasting: 1\nI0330 21:40:21.575612 1213 log.go:172] (0xc0009f40b0) Go away received\nI0330 21:40:21.575729 1213 log.go:172] (0xc0009f40b0) (0xc0006e1a40) Stream removed, broadcasting: 1\nI0330 21:40:21.575750 1213 log.go:172] (0xc0009f40b0) (0xc0009c0000) Stream removed, broadcasting: 3\nI0330 21:40:21.575762 1213 log.go:172] (0xc0009f40b0) (0xc0006e1c20) Stream removed, broadcasting: 5\n" Mar 30 21:40:21.580: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 30 21:40:21.580: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 30 21:40:21.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5838 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 30 21:40:21.829: INFO: stderr: "I0330 21:40:21.719283 1235 log.go:172] (0xc000104e70) (0xc00065dcc0) Create stream\nI0330 21:40:21.719348 1235 log.go:172] (0xc000104e70) (0xc00065dcc0) Stream added, broadcasting: 1\nI0330 21:40:21.722846 1235 log.go:172] (0xc000104e70) Reply frame received for 1\nI0330 21:40:21.722899 1235 log.go:172] (0xc000104e70) (0xc00099e000) Create stream\nI0330 21:40:21.722917 1235 log.go:172] (0xc000104e70) (0xc00099e000) Stream added, broadcasting: 3\nI0330 21:40:21.723903 1235 log.go:172] (0xc000104e70) Reply frame received for 3\nI0330 21:40:21.723952 1235 log.go:172] (0xc000104e70) (0xc0004cc000) Create stream\nI0330 21:40:21.723969 1235 log.go:172] (0xc000104e70) (0xc0004cc000) Stream added, broadcasting: 5\nI0330 21:40:21.724956 1235 log.go:172] (0xc000104e70) Reply frame received for 5\nI0330 21:40:21.792222 1235 log.go:172] (0xc000104e70) Data frame received for 5\nI0330 21:40:21.792242 1235 log.go:172] (0xc0004cc000) (5) Data frame handling\nI0330 21:40:21.792257 1235 log.go:172] (0xc0004cc000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0330 21:40:21.822231 1235 log.go:172] (0xc000104e70) Data frame received for 3\nI0330 21:40:21.822315 1235 log.go:172] (0xc00099e000) (3) Data frame handling\nI0330 21:40:21.822334 1235 log.go:172] (0xc00099e000) (3) Data frame sent\nI0330 21:40:21.822496 1235 log.go:172] (0xc000104e70) Data frame received for 5\nI0330 21:40:21.822555 1235 log.go:172] (0xc0004cc000) (5) Data frame handling\nI0330 21:40:21.822588 1235 log.go:172] (0xc000104e70) Data frame received for 3\nI0330 21:40:21.822606 1235 log.go:172] (0xc00099e000) (3) Data frame handling\nI0330 21:40:21.824289 1235 log.go:172] (0xc000104e70) Data frame received for 1\nI0330 21:40:21.824327 1235 log.go:172] (0xc00065dcc0) (1) Data frame handling\nI0330 21:40:21.824357 1235 log.go:172] (0xc00065dcc0) (1) Data frame sent\nI0330 21:40:21.824386 1235 log.go:172] (0xc000104e70) (0xc00065dcc0) Stream removed, broadcasting: 1\nI0330 21:40:21.824419 1235 log.go:172] (0xc000104e70) Go away received\nI0330 21:40:21.824832 1235 log.go:172] (0xc000104e70) (0xc00065dcc0) Stream removed, broadcasting: 1\nI0330 21:40:21.824871 1235 log.go:172] (0xc000104e70) (0xc00099e000) Stream removed, 
broadcasting: 3\nI0330 21:40:21.824885 1235 log.go:172] (0xc000104e70) (0xc0004cc000) Stream removed, broadcasting: 5\n" Mar 30 21:40:21.829: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 30 21:40:21.829: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 30 21:40:21.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5838 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 30 21:40:22.050: INFO: stderr: "I0330 21:40:21.969406 1258 log.go:172] (0xc000117600) (0xc000984000) Create stream\nI0330 21:40:21.969470 1258 log.go:172] (0xc000117600) (0xc000984000) Stream added, broadcasting: 1\nI0330 21:40:21.971610 1258 log.go:172] (0xc000117600) Reply frame received for 1\nI0330 21:40:21.971653 1258 log.go:172] (0xc000117600) (0xc0005efa40) Create stream\nI0330 21:40:21.971663 1258 log.go:172] (0xc000117600) (0xc0005efa40) Stream added, broadcasting: 3\nI0330 21:40:21.972531 1258 log.go:172] (0xc000117600) Reply frame received for 3\nI0330 21:40:21.972561 1258 log.go:172] (0xc000117600) (0xc0005efc20) Create stream\nI0330 21:40:21.972569 1258 log.go:172] (0xc000117600) (0xc0005efc20) Stream added, broadcasting: 5\nI0330 21:40:21.973397 1258 log.go:172] (0xc000117600) Reply frame received for 5\nI0330 21:40:22.014564 1258 log.go:172] (0xc000117600) Data frame received for 5\nI0330 21:40:22.014584 1258 log.go:172] (0xc0005efc20) (5) Data frame handling\nI0330 21:40:22.014603 1258 log.go:172] (0xc0005efc20) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0330 21:40:22.043438 1258 log.go:172] (0xc000117600) Data frame received for 3\nI0330 21:40:22.043465 1258 log.go:172] (0xc0005efa40) (3) Data frame handling\nI0330 21:40:22.043475 1258 log.go:172] (0xc0005efa40) (3) Data frame sent\nI0330 21:40:22.043482 1258 log.go:172] (0xc000117600) Data frame received for 3\nI0330 21:40:22.043488 1258 log.go:172] (0xc0005efa40) (3) Data frame handling\nI0330 21:40:22.043523 1258 log.go:172] (0xc000117600) Data frame received for 5\nI0330 21:40:22.043546 1258 log.go:172] (0xc0005efc20) (5) Data frame handling\nI0330 21:40:22.045355 1258 log.go:172] (0xc000117600) Data frame received for 1\nI0330 21:40:22.045391 1258 log.go:172] (0xc000984000) (1) Data frame handling\nI0330 21:40:22.045411 1258 log.go:172] (0xc000984000) (1) Data frame sent\nI0330 21:40:22.045448 1258 log.go:172] (0xc000117600) (0xc000984000) Stream removed, broadcasting: 1\nI0330 21:40:22.045477 1258 log.go:172] (0xc000117600) Go away received\nI0330 21:40:22.045891 1258 log.go:172] (0xc000117600) (0xc000984000) Stream removed, broadcasting: 1\nI0330 21:40:22.045924 1258 log.go:172] (0xc000117600) (0xc0005efa40) Stream removed, broadcasting: 3\nI0330 21:40:22.045944 1258 log.go:172] (0xc000117600) (0xc0005efc20) Stream removed, broadcasting: 5\n" Mar 30 21:40:22.050: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 30 21:40:22.050: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 30 21:40:22.050: INFO: Waiting for statefulset status.replicas updated to 0 Mar 30 21:40:22.053: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Mar 30 21:40:32.061: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Mar 30 21:40:32.061: 
INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Mar 30 21:40:32.061: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Mar 30 21:40:32.072: INFO: POD NODE PHASE GRACE CONDITIONS Mar 30 21:40:32.072: INFO: ss-0 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:39:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:39:40 +0000 UTC }] Mar 30 21:40:32.072: INFO: ss-1 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:00 +0000 UTC }] Mar 30 21:40:32.072: INFO: ss-2 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:00 +0000 UTC }] Mar 30 21:40:32.072: INFO: Mar 30 21:40:32.072: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 30 21:40:33.147: INFO: POD NODE PHASE GRACE CONDITIONS Mar 30 21:40:33.147: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:39:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:39:40 +0000 UTC }] Mar 30 21:40:33.148: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:00 +0000 UTC }] Mar 30 21:40:33.148: INFO: ss-2 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:00 +0000 UTC }] Mar 30 21:40:33.148: INFO: Mar 30 21:40:33.148: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 30 21:40:34.152: 
INFO: POD NODE PHASE GRACE CONDITIONS Mar 30 21:40:34.152: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:39:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:39:40 +0000 UTC }] Mar 30 21:40:34.152: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:00 +0000 UTC }] Mar 30 21:40:34.152: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:00 +0000 UTC }] Mar 30 21:40:34.152: INFO: Mar 30 21:40:34.152: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 30 21:40:35.157: INFO: POD NODE PHASE GRACE CONDITIONS Mar 30 21:40:35.157: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:39:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:39:40 +0000 UTC }] Mar 30 21:40:35.157: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:00 +0000 UTC }] Mar 30 21:40:35.157: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:00 +0000 UTC }] Mar 30 21:40:35.157: INFO: Mar 30 21:40:35.157: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 30 21:40:36.162: INFO: POD NODE PHASE GRACE CONDITIONS Mar 30 21:40:36.162: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:39:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 
+0000 UTC 2020-03-30 21:40:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:39:40 +0000 UTC }] Mar 30 21:40:36.162: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:00 +0000 UTC }] Mar 30 21:40:36.162: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:00 +0000 UTC }] Mar 30 21:40:36.162: INFO: Mar 30 21:40:36.163: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 30 21:40:37.167: INFO: POD NODE PHASE GRACE CONDITIONS Mar 30 21:40:37.167: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:39:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:39:40 +0000 UTC }] Mar 30 21:40:37.167: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:00 +0000 UTC }] Mar 30 21:40:37.167: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:00 +0000 UTC }] Mar 30 21:40:37.167: INFO: Mar 30 21:40:37.167: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 30 21:40:38.172: INFO: POD NODE PHASE GRACE CONDITIONS Mar 30 21:40:38.172: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:39:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:22 +0000 UTC ContainersNotReady 
containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:39:40 +0000 UTC }] Mar 30 21:40:38.172: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:00 +0000 UTC }] Mar 30 21:40:38.172: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:00 +0000 UTC }] Mar 30 21:40:38.172: INFO: Mar 30 21:40:38.172: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 30 21:40:39.176: INFO: POD NODE PHASE GRACE CONDITIONS Mar 30 21:40:39.176: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:39:40 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:39:40 +0000 UTC }] Mar 30 21:40:39.176: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:00 +0000 UTC }] Mar 30 21:40:39.176: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:00 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:22 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-30 21:40:00 +0000 UTC }] Mar 30 21:40:39.176: INFO: Mar 30 21:40:39.176: INFO: StatefulSet ss has not reached scale 0, at 3 Mar 30 21:40:40.180: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.89237795s Mar 30 21:40:41.184: INFO: Verifying statefulset ss doesn't scale past 0 for another 888.359755ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods run in namespace statefulset-5838 Mar 30 21:40:42.188: INFO: Scaling statefulset ss to 0 Mar 30 21:40:42.198: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 30 21:40:42.200: INFO:
Deleting all statefulset in ns statefulset-5838 Mar 30 21:40:42.202: INFO: Scaling statefulset ss to 0 Mar 30 21:40:42.208: INFO: Waiting for statefulset status.replicas updated to 0 Mar 30 21:40:42.210: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:40:42.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5838" for this suite. • [SLOW TEST:62.212 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":128,"skipped":2177,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:40:42.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:40:46.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-3931" for this suite. 
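(For reference, the wrapper-volume scenario above amounts to one pod mounting a secret volume and a configmap volume side by side and checking that neither mount conflicts with the other. A minimal hand-run sketch follows; every name in it is illustrative rather than taken from the test.)

kubectl create secret generic wrapped-secret --from-literal=data-1=value-1
kubectl create configmap wrapped-configmap --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-wrapper-demo
spec:
  restartPolicy: Never
  containers:
  - name: checker
    image: busybox:1.29
    # both mounts must be visible and readable at the same time
    command: ["sh", "-c", "ls /etc/secret-volume /etc/configmap-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
    - name: configmap-volume
      mountPath: /etc/configmap-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: wrapped-secret
  - name: configmap-volume
    configMap:
      name: wrapped-configmap
EOF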
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":129,"skipped":2181,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:40:46.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 30 21:40:50.565: INFO: Waiting up to 5m0s for pod "client-envvars-faa03dce-d6be-4b2e-9040-28e064976733" in namespace "pods-7518" to be "success or failure" Mar 30 21:40:50.575: INFO: Pod "client-envvars-faa03dce-d6be-4b2e-9040-28e064976733": Phase="Pending", Reason="", readiness=false. Elapsed: 9.052441ms Mar 30 21:40:52.578: INFO: Pod "client-envvars-faa03dce-d6be-4b2e-9040-28e064976733": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012854477s Mar 30 21:40:54.583: INFO: Pod "client-envvars-faa03dce-d6be-4b2e-9040-28e064976733": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017110824s STEP: Saw pod success Mar 30 21:40:54.583: INFO: Pod "client-envvars-faa03dce-d6be-4b2e-9040-28e064976733" satisfied condition "success or failure" Mar 30 21:40:54.586: INFO: Trying to get logs from node jerma-worker2 pod client-envvars-faa03dce-d6be-4b2e-9040-28e064976733 container env3cont: STEP: delete the pod Mar 30 21:40:54.607: INFO: Waiting for pod client-envvars-faa03dce-d6be-4b2e-9040-28e064976733 to disappear Mar 30 21:40:54.611: INFO: Pod client-envvars-faa03dce-d6be-4b2e-9040-28e064976733 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:40:54.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7518" for this suite. 
• [SLOW TEST:8.215 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":130,"skipped":2189,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:40:54.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 30 21:40:54.713: INFO: Waiting up to 5m0s for pod "downwardapi-volume-31d62e26-6f50-4924-b593-fd71b73badc8" in namespace "downward-api-8314" to be "success or failure" Mar 30 21:40:54.733: INFO: Pod "downwardapi-volume-31d62e26-6f50-4924-b593-fd71b73badc8": Phase="Pending", Reason="", readiness=false. Elapsed: 20.64594ms Mar 30 21:40:56.738: INFO: Pod "downwardapi-volume-31d62e26-6f50-4924-b593-fd71b73badc8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025184065s Mar 30 21:40:58.742: INFO: Pod "downwardapi-volume-31d62e26-6f50-4924-b593-fd71b73badc8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029544231s STEP: Saw pod success Mar 30 21:40:58.742: INFO: Pod "downwardapi-volume-31d62e26-6f50-4924-b593-fd71b73badc8" satisfied condition "success or failure" Mar 30 21:40:58.746: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-31d62e26-6f50-4924-b593-fd71b73badc8 container client-container: STEP: delete the pod Mar 30 21:40:58.777: INFO: Waiting for pod downwardapi-volume-31d62e26-6f50-4924-b593-fd71b73badc8 to disappear Mar 30 21:40:58.791: INFO: Pod downwardapi-volume-31d62e26-6f50-4924-b593-fd71b73badc8 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:40:58.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8314" for this suite. 
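(For reference, the mode check above corresponds to a downwardAPI volume item carrying an explicit mode. A minimal sketch, names illustrative:)

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    # prints the file's permission bits; 400 is expected here
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 0400
EOF
kubectl logs downwardapi-mode-demo   # expected output: 400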
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":131,"skipped":2229,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:40:58.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-2ba3380e-ed02-4e58-835a-9b774b1cce5f STEP: Creating a pod to test consume secrets Mar 30 21:40:58.912: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-07dfcfa7-c345-4985-ba45-abdb0672810f" in namespace "projected-4421" to be "success or failure" Mar 30 21:40:58.962: INFO: Pod "pod-projected-secrets-07dfcfa7-c345-4985-ba45-abdb0672810f": Phase="Pending", Reason="", readiness=false. Elapsed: 49.644018ms Mar 30 21:41:00.998: INFO: Pod "pod-projected-secrets-07dfcfa7-c345-4985-ba45-abdb0672810f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085837996s Mar 30 21:41:03.028: INFO: Pod "pod-projected-secrets-07dfcfa7-c345-4985-ba45-abdb0672810f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.11533975s STEP: Saw pod success Mar 30 21:41:03.028: INFO: Pod "pod-projected-secrets-07dfcfa7-c345-4985-ba45-abdb0672810f" satisfied condition "success or failure" Mar 30 21:41:03.031: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-07dfcfa7-c345-4985-ba45-abdb0672810f container projected-secret-volume-test: STEP: delete the pod Mar 30 21:41:03.060: INFO: Waiting for pod pod-projected-secrets-07dfcfa7-c345-4985-ba45-abdb0672810f to disappear Mar 30 21:41:03.078: INFO: Pod pod-projected-secrets-07dfcfa7-c345-4985-ba45-abdb0672810f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:41:03.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4421" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":132,"skipped":2242,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:41:03.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Mar 30 21:41:03.149: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:41:19.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8689" for this suite. • [SLOW TEST:16.415 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":133,"skipped":2279,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:41:19.501: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-9e9b3fe9-ae6f-4dd4-931c-5883506dfaad in namespace container-probe-6806 Mar 30 21:41:23.574: INFO: Started pod liveness-9e9b3fe9-ae6f-4dd4-931c-5883506dfaad in namespace container-probe-6806 STEP: checking the pod's current state and verifying that restartCount is present Mar 30 21:41:23.577: INFO: Initial restart count of pod liveness-9e9b3fe9-ae6f-4dd4-931c-5883506dfaad is 0 Mar 30 
21:41:39.612: INFO: Restart count of pod container-probe-6806/liveness-9e9b3fe9-ae6f-4dd4-931c-5883506dfaad is now 1 (16.035013662s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:41:39.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6806" for this suite. • [SLOW TEST:20.136 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":134,"skipped":2315,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:41:39.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Mar 30 21:41:39.692: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the sample API server. 
Mar 30 21:41:40.139: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Mar 30 21:41:42.415: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721201300, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721201300, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721201300, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721201300, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 30 21:41:44.418: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721201300, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721201300, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721201300, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721201300, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 30 21:41:47.052: INFO: Waited 627.815103ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:41:47.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-1733" for this suite. 
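(For reference, registering an aggregated API server as done above comes down to an APIService object that points the aggregator at a Service fronting the extension server; the backing Deployment and Service are omitted here. A hedged sketch with placeholder group and service names, skipping TLS verification for brevity:)

kubectl apply -f - <<'EOF'
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.example.com
spec:
  group: wardle.example.com
  version: v1alpha1
  groupPriorityMinimum: 2000
  versionPriority: 200
  insecureSkipTLSVerify: true    # production setups pin a caBundle instead
  service:
    name: sample-api
    namespace: default
EOF
# Available becomes True once the backend deployment is ready to serve
kubectl get apiservice v1alpha1.wardle.example.com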
• [SLOW TEST:7.973 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":135,"skipped":2328,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:41:47.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 30 21:41:52.409: INFO: Successfully updated pod "pod-update-activedeadlineseconds-d82604a4-0f22-4c76-ae7a-071eaac1fbd9" Mar 30 21:41:52.409: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-d82604a4-0f22-4c76-ae7a-071eaac1fbd9" in namespace "pods-606" to be "terminated due to deadline exceeded" Mar 30 21:41:52.415: INFO: Pod "pod-update-activedeadlineseconds-d82604a4-0f22-4c76-ae7a-071eaac1fbd9": Phase="Running", Reason="", readiness=true. Elapsed: 5.564932ms Mar 30 21:41:54.419: INFO: Pod "pod-update-activedeadlineseconds-d82604a4-0f22-4c76-ae7a-071eaac1fbd9": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.009642031s Mar 30 21:41:54.419: INFO: Pod "pod-update-activedeadlineseconds-d82604a4-0f22-4c76-ae7a-071eaac1fbd9" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:41:54.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-606" for this suite. 
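(For reference, the deadline update above works because a pod's activeDeadlineSeconds may be shortened, though never lengthened, while the pod runs. A hand-run sketch, names illustrative:)

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: deadline-demo
spec:
  activeDeadlineSeconds: 600
  restartPolicy: Never
  containers:
  - name: sleeper
    image: busybox:1.29
    command: ["sleep", "3600"]
EOF
# shrink the deadline on the live pod
kubectl patch pod deadline-demo --type=merge -p '{"spec":{"activeDeadlineSeconds":5}}'
# a few seconds later the pod is Failed with reason DeadlineExceeded
kubectl get pod deadline-demo -o jsonpath='{.status.phase}/{.status.reason}'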
• [SLOW TEST:6.832 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":136,"skipped":2358,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:41:54.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting the proxy server Mar 30 21:41:54.504: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:41:54.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4060" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":137,"skipped":2373,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:41:54.606: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1464 STEP: creating a pod Mar 30 21:41:54.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-8150 -- logs-generator --log-lines-total 100 --run-duration 20s' Mar 30 21:41:57.183: INFO: stderr: "" Mar 30 21:41:57.183: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Waiting for log generator to start.
Mar 30 21:41:57.183: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Mar 30 21:41:57.183: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-8150" to be "running and ready, or succeeded" Mar 30 21:41:57.195: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 12.369191ms Mar 30 21:41:59.199: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016580666s Mar 30 21:42:01.204: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.020848888s Mar 30 21:42:01.204: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Mar 30 21:42:01.204: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] STEP: checking for matching strings Mar 30 21:42:01.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8150' Mar 30 21:42:01.336: INFO: stderr: "" Mar 30 21:42:01.336: INFO: stdout: "I0330 21:41:59.640414 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/default/pods/t7r 463\nI0330 21:41:59.840514 1 logs_generator.go:76] 1 GET /api/v1/namespaces/default/pods/2jt 419\nI0330 21:42:00.040644 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/r4vv 592\nI0330 21:42:00.240639 1 logs_generator.go:76] 3 POST /api/v1/namespaces/kube-system/pods/kkq 494\nI0330 21:42:00.440588 1 logs_generator.go:76] 4 POST /api/v1/namespaces/kube-system/pods/g86 424\nI0330 21:42:00.640624 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/ns/pods/2vds 596\nI0330 21:42:00.840602 1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/g7k 585\nI0330 21:42:01.040612 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/kube-system/pods/gs5 465\nI0330 21:42:01.240565 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/4l9 278\n" STEP: limiting log lines Mar 30 21:42:01.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8150 --tail=1' Mar 30 21:42:01.439: INFO: stderr: "" Mar 30 21:42:01.439: INFO: stdout: "I0330 21:42:01.240565 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/4l9 278\n" Mar 30 21:42:01.439: INFO: got output "I0330 21:42:01.240565 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/4l9 278\n" STEP: limiting log bytes Mar 30 21:42:01.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8150 --limit-bytes=1' Mar 30 21:42:01.542: INFO: stderr: "" Mar 30 21:42:01.542: INFO: stdout: "I" Mar 30 21:42:01.542: INFO: got output "I" STEP: exposing timestamps Mar 30 21:42:01.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8150 --tail=1 --timestamps' Mar 30 21:42:01.645: INFO: stderr: "" Mar 30 21:42:01.645: INFO: stdout: "2020-03-30T21:42:01.440704181Z I0330 21:42:01.440582 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/ns/pods/xhd 270\n2020-03-30T21:42:01.640711373Z I0330 21:42:01.640548 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/kube-system/pods/2plg 525\n" Mar 30 21:42:01.645: INFO: got output "2020-03-30T21:42:01.440704181Z I0330 21:42:01.440582 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/ns/pods/xhd 270\n2020-03-30T21:42:01.640711373Z I0330 21:42:01.640548 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/kube-system/pods/2plg 525\n" Mar 30 21:42:01.645: FAIL:
Expected : 2 to equal : 1 [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1470 Mar 30 21:42:01.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-8150' Mar 30 21:42:09.260: INFO: stderr: "" Mar 30 21:42:09.260: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 STEP: Collecting events from namespace "kubectl-8150". STEP: Found 5 events. Mar 30 21:42:09.271: INFO: At 2020-03-30 21:41:57 +0000 UTC - event for logs-generator: {default-scheduler } Scheduled: Successfully assigned kubectl-8150/logs-generator to jerma-worker Mar 30 21:42:09.271: INFO: At 2020-03-30 21:41:58 +0000 UTC - event for logs-generator: {kubelet jerma-worker} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine Mar 30 21:42:09.271: INFO: At 2020-03-30 21:41:59 +0000 UTC - event for logs-generator: {kubelet jerma-worker} Created: Created container logs-generator Mar 30 21:42:09.271: INFO: At 2020-03-30 21:41:59 +0000 UTC - event for logs-generator: {kubelet jerma-worker} Started: Started container logs-generator Mar 30 21:42:09.271: INFO: At 2020-03-30 21:42:01 +0000 UTC - event for logs-generator: {kubelet jerma-worker} Killing: Stopping container logs-generator Mar 30 21:42:09.274: INFO: POD NODE PHASE GRACE CONDITIONS Mar 30 21:42:09.274: INFO: Mar 30 21:42:09.277: INFO: Logging node info for node jerma-control-plane Mar 30 21:42:09.279: INFO: Node Info: &Node{ObjectMeta:{jerma-control-plane /api/v1/nodes/jerma-control-plane a3f47ead-f913-4a01-918b-faa66ed74dd8 4059909 0 2020-03-15 18:25:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-control-plane kubernetes.io/os:linux node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-30 21:41:05 +0000 UTC,LastTransitionTime:2020-03-15 18:25:55 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-30 21:41:05 +0000 UTC,LastTransitionTime:2020-03-15 18:25:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-30 21:41:05 +0000 
UTC,LastTransitionTime:2020-03-15 18:25:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-30 21:41:05 +0000 UTC,LastTransitionTime:2020-03-15 18:26:27 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.9,},NodeAddress{Type:Hostname,Address:jerma-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3bcfb16fe77247d3af07bed975350d5c,SystemUUID:947a2db5-5527-4203-8af5-13d97ffe8a80,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.2-31-gaa877d78,KubeletVersion:v1.17.2,KubeProxyVersion:v1.17.2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.3-0],SizeBytes:289997247,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.17.2],SizeBytes:144352049,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.17.2],SizeBytes:132096126,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.17.2],SizeBytes:131180355,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.17.2],SizeBytes:111937841,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.5],SizeBytes:41705951,},ContainerImage{Names:[k8s.gcr.io/pause:3.1],SizeBytes:746479,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 30 21:42:09.279: INFO: Logging kubelet events for node jerma-control-plane Mar 30 21:42:09.281: INFO: Logging pods the kubelet thinks is on node jerma-control-plane Mar 30 21:42:09.304: INFO: local-path-provisioner-85445b74d4-7mg5w started at 2020-03-15 18:26:27 +0000 UTC (0+1 container statuses recorded) Mar 30 21:42:09.304: INFO: Container local-path-provisioner ready: true, restart count 0 Mar 30 21:42:09.304: INFO: kube-apiserver-jerma-control-plane started at 2020-03-15 18:25:57 +0000 UTC (0+1 container statuses recorded) Mar 30 21:42:09.304: INFO: Container kube-apiserver ready: true, restart count 0 Mar 30 21:42:09.304: INFO: kube-controller-manager-jerma-control-plane started at 2020-03-15 18:25:57 +0000 UTC (0+1 container statuses recorded) Mar 30 21:42:09.304: INFO: Container kube-controller-manager ready: true, restart count 0 Mar 30 21:42:09.304: INFO: kube-proxy-mm9zd started at 2020-03-15 18:26:13 +0000 UTC (0+1 container statuses recorded) Mar 30 21:42:09.304: INFO: Container kube-proxy ready: true, restart count 0 Mar 30 21:42:09.304: INFO: kindnet-bjddj started at 2020-03-15 18:26:13 +0000 UTC (0+1 container statuses recorded) Mar 30 21:42:09.304: INFO: Container kindnet-cni ready: true, restart count 0 Mar 30 21:42:09.304: INFO: coredns-6955765f44-svxk5 started at 2020-03-15 18:26:28 +0000 UTC (0+1 container statuses recorded) Mar 30 21:42:09.304: INFO: Container coredns ready: true, restart count 0 Mar 30 21:42:09.304: INFO: coredns-6955765f44-rll5s started at 2020-03-15 18:26:28 +0000 UTC (0+1 container statuses recorded) Mar 30 21:42:09.304: INFO: Container coredns ready: true, restart count 0 Mar 30 21:42:09.304: INFO: etcd-jerma-control-plane started at 2020-03-15 18:25:57 +0000 UTC (0+1 container statuses recorded) Mar 30 21:42:09.304: 
INFO: Container etcd ready: true, restart count 0 Mar 30 21:42:09.304: INFO: kube-scheduler-jerma-control-plane started at 2020-03-15 18:25:57 +0000 UTC (0+1 container statuses recorded) Mar 30 21:42:09.304: INFO: Container kube-scheduler ready: true, restart count 0 W0330 21:42:09.308384 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 30 21:42:09.394: INFO: Latency metrics for node jerma-control-plane Mar 30 21:42:09.394: INFO: Logging node info for node jerma-worker Mar 30 21:42:09.398: INFO: Node Info: &Node{ObjectMeta:{jerma-worker /api/v1/nodes/jerma-worker d3be6d4b-da1a-4024-b031-0d2aac4bfa20 4058290 0 2020-03-15 18:26:33 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-worker kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-30 21:37:21 +0000 UTC,LastTransitionTime:2020-03-15 18:26:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-30 21:37:21 +0000 UTC,LastTransitionTime:2020-03-15 18:26:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-30 21:37:21 +0000 UTC,LastTransitionTime:2020-03-15 18:26:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-30 21:37:21 +0000 UTC,LastTransitionTime:2020-03-15 18:27:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.10,},NodeAddress{Type:Hostname,Address:jerma-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a1961fc66ec8469d814538695177d17d,SystemUUID:0df80521-e1b3-45a7-be2b-b3bd800b8699,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.2-31-gaa877d78,KubeletVersion:v1.17.2,KubeProxyVersion:v1.17.2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:141a336f17eaf068dbe8da4b01a832033aed5c09e7fa6349ec091ee30b76c9b1 
docker.io/ollivier/clearwater-homestead-prov:latest],SizeBytes:360403156,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:8c84761d2d906e344bc6a85a11451d35696cf684305555611df16ce2615ac816 docker.io/ollivier/clearwater-ellis:latest],SizeBytes:351094667,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:f365f3b72267bef0fd696e4a93c0f3c19fb65ad42a8850fe22873dbadd03fdba docker.io/ollivier/clearwater-astaire:latest],SizeBytes:326777758,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:44590682de48854faeccc1f4c7de39cb666014a0c4e3abd93adcccad3208a6e2 docker.io/ollivier/clearwater-sprout:latest],SizeBytes:298307172,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:0b3c89ab451b09e347657d5f85ed99d47ec3e8689b98916af72b23576926b08d docker.io/ollivier/clearwater-homestead:latest],SizeBytes:294847386,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.3-0],SizeBytes:289997247,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:8ddcfa68c82ebf0b4ce6add019a8f57c024aec453f47a37017cf7dff8680268a docker.io/ollivier/clearwater-chronos:latest],SizeBytes:285184449,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.17.2],SizeBytes:144352049,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.17.2],SizeBytes:132096126,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.17.2],SizeBytes:131180355,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.17.2],SizeBytes:111937841,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:c2efaddff058c146b93517d06a3a8066b6e88fecdd98fa6847cb69db22555f04 docker.io/ollivier/clearwater-live-test:latest],SizeBytes:46948523,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.5],SizeBytes:41705951,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:17444032,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:5e21ed2c67f8015ed449f4402c942d8200a0b59cc0b518744e2e45a3de219ba9 docker.io/aquasec/kube-bench:latest],SizeBytes:8028777,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd 
gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:4331310,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 docker.io/appropriate/curl:latest],SizeBytes:2779755,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:1799936,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:1791163,},ContainerImage{Names:[docker.io/library/busybox@sha256:b26cd013274a657b86e706210ddd5cc1f82f50155791199d29b9e86e935ce135 docker.io/library/busybox:latest],SizeBytes:764687,},ContainerImage{Names:[k8s.gcr.io/pause:3.1],SizeBytes:746479,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:599341,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:539309,},ContainerImage{Names:[docker.io/kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 docker.io/kubernetes/pause:latest],SizeBytes:74015,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 30 21:42:09.399: INFO: Logging kubelet events for node jerma-worker Mar 30 21:42:09.402: INFO: Logging pods the kubelet thinks is on node jerma-worker Mar 30 21:42:09.407: INFO: kindnet-c5svj started at 2020-03-15 18:26:33 +0000 UTC (0+1 container statuses recorded) Mar 30 21:42:09.407: INFO: Container kindnet-cni ready: true, restart count 0 Mar 30 21:42:09.407: INFO: kube-proxy-44mlz started at 2020-03-15 18:26:33 +0000 UTC (0+1 container statuses recorded) Mar 30 21:42:09.407: INFO: Container kube-proxy ready: true, restart count 0 W0330 21:42:09.411085 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Mar 30 21:42:09.460: INFO: Latency metrics for node jerma-worker Mar 30 21:42:09.460: INFO: Logging node info for node jerma-worker2 Mar 30 21:42:09.463: INFO: Node Info: &Node{ObjectMeta:{jerma-worker2 /api/v1/nodes/jerma-worker2 9b2e5b39-8dbb-4119-80fd-75a84fb601d7 4059646 0 2020-03-15 18:26:33 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-worker2 kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-03-30 21:40:33 +0000 UTC,LastTransitionTime:2020-03-15 18:26:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-03-30 21:40:33 +0000 UTC,LastTransitionTime:2020-03-15 18:26:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-03-30 21:40:33 +0000 UTC,LastTransitionTime:2020-03-15 18:26:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-03-30 21:40:33 +0000 UTC,LastTransitionTime:2020-03-15 18:27:14 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.8,},NodeAddress{Type:Hostname,Address:jerma-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f27cacf2d4974d3480d11dd8736e63d5,SystemUUID:6fef03e6-b656-4894-b57f-89d5451db372,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.2-31-gaa877d78,KubeletVersion:v1.17.2,KubeProxyVersion:v1.17.2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:07e93f55decdc1224fb8d161edb5617d58e3488c1250168337548ccc3e82f6b7 docker.io/ollivier/clearwater-cassandra:latest],SizeBytes:386164043,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:141a336f17eaf068dbe8da4b01a832033aed5c09e7fa6349ec091ee30b76c9b1 docker.io/ollivier/clearwater-homestead-prov:latest],SizeBytes:360403156,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:8c84761d2d906e344bc6a85a11451d35696cf684305555611df16ce2615ac816 
docker.io/ollivier/clearwater-ellis:latest],SizeBytes:351094667,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:19c6d11d2678c44822f07c01c574fed426e3c99003b6af0410f0911d57939d5a docker.io/ollivier/clearwater-homer:latest],SizeBytes:343984685,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:f365f3b72267bef0fd696e4a93c0f3c19fb65ad42a8850fe22873dbadd03fdba docker.io/ollivier/clearwater-astaire:latest],SizeBytes:326777758,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:eb98596100b1553c9814b6185863ec53e743eb0370faeeafe16fc1dfe8d02ec3 docker.io/ollivier/clearwater-bono:latest],SizeBytes:303283801,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:0b3c89ab451b09e347657d5f85ed99d47ec3e8689b98916af72b23576926b08d docker.io/ollivier/clearwater-homestead:latest],SizeBytes:294847386,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3-0 k8s.gcr.io/etcd:3.4.3],SizeBytes:289997247,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:20069a8d9f366dd0f003afa7c4fbcbcd5e9d2b99abae83540c6538fc7cff6b97 docker.io/ollivier/clearwater-ralf:latest],SizeBytes:287124270,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:8ddcfa68c82ebf0b4ce6add019a8f57c024aec453f47a37017cf7dff8680268a docker.io/ollivier/clearwater-chronos:latest],SizeBytes:285184449,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.17.2],SizeBytes:144352049,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.17.2],SizeBytes:132096126,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.17.2],SizeBytes:131180355,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:563f44851d413c7199a0a8a2a13df1e98bee48229e19f4917e6da68e5482df6e docker.io/aquasec/kube-hunter:latest],SizeBytes:123995068,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.17.2],SizeBytes:111937841,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.5],SizeBytes:41705951,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:17444032,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10],SizeBytes:16222606,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:5e21ed2c67f8015ed449f4402c942d8200a0b59cc0b518744e2e45a3de219ba9 docker.io/aquasec/kube-bench:latest],SizeBytes:8028777,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 
docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:4331310,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 docker.io/appropriate/curl:latest],SizeBytes:2779755,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:1799936,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:1791163,},ContainerImage{Names:[docker.io/library/busybox@sha256:b26cd013274a657b86e706210ddd5cc1f82f50155791199d29b9e86e935ce135 docker.io/library/busybox:latest],SizeBytes:764687,},ContainerImage{Names:[k8s.gcr.io/pause:3.1],SizeBytes:746479,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:599341,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:539309,},ContainerImage{Names:[docker.io/kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 docker.io/kubernetes/pause:latest],SizeBytes:74015,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Mar 30 21:42:09.464: INFO: Logging kubelet events for node jerma-worker2 Mar 30 21:42:09.467: INFO: Logging pods the kubelet thinks is on node jerma-worker2 Mar 30 21:42:09.473: INFO: kindnet-zk6sq started at 2020-03-15 18:26:33 +0000 UTC (0+1 container statuses recorded) Mar 30 21:42:09.473: INFO: Container kindnet-cni ready: true, restart count 0 Mar 30 21:42:09.473: INFO: kube-bench-hk6h6 started at 2020-03-26 15:21:52 +0000 UTC (0+1 container statuses recorded) Mar 30 21:42:09.473: INFO: Container kube-bench ready: false, restart count 0 Mar 30 21:42:09.473: INFO: kube-proxy-75q42 started at 2020-03-15 18:26:33 +0000 UTC (0+1 container statuses recorded) Mar 30 21:42:09.473: INFO: Container kube-proxy ready: true, restart count 0 Mar 30 21:42:09.473: INFO: kube-hunter-8g6pb started at 2020-03-26 15:21:33 +0000 UTC (0+1 container statuses recorded) Mar 30 21:42:09.473: INFO: Container kube-hunter ready: false, restart count 0 W0330 21:42:09.476891 6 metrics_grabber.go:79] Master node is not registered. 
Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 30 21:42:09.537: INFO: Latency metrics for node jerma-worker2 Mar 30 21:42:09.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8150" for this suite. • Failure [14.940 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1460 should be able to retrieve and filter logs [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 30 21:42:01.645: Expected : 2 to equal : 1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1517 ------------------------------ {"msg":"FAILED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":137,"skipped":2375,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:42:09.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 30 21:42:09.594: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Mar 30 21:42:11.634: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:42:12.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-504" for this suite. 
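For readers reproducing the quota test above outside the e2e framework, here is a minimal sketch of the same scenario: a ResourceQuota capping the namespace at two pods, and a ReplicationController asking for three. Names and the image mirror the log; the namespace and the exact condition fields asserted are my assumptions, so treat the jsonpath checks as illustrative.

kubectl create namespace rc-quota-demo   # hypothetical namespace
kubectl apply -n rc-quota-demo -f - <<'EOF'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: condition-test
spec:
  hard:
    pods: "2"
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3
  selector:
    app: condition-test
  template:
    metadata:
      labels:
        app: condition-test
    spec:
      containers:
      - name: httpd
        image: docker.io/library/httpd:2.4.38-alpine
EOF
# While the RC cannot satisfy replicas=3, it surfaces a failure condition:
kubectl get rc condition-test -n rc-quota-demo -o jsonpath='{.status.conditions}'
# Scaling back within the quota should clear the condition, as the test checks:
kubectl scale rc condition-test -n rc-quota-demo --replicas=2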
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":138,"skipped":2377,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:42:12.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:42:18.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9738" for this suite. • [SLOW TEST:5.762 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":139,"skipped":2379,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} S ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:42:18.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Starting the proxy Mar 30 21:42:18.569: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix167389866/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:42:18.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-835" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":140,"skipped":2380,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:42:18.801: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 30 21:42:18.989: INFO: Waiting up to 5m0s for pod "downwardapi-volume-998d6fbf-1b6a-4ae1-b23c-2461f5dbe71c" in namespace "downward-api-6139" to be "success or failure" Mar 30 21:42:19.003: INFO: Pod "downwardapi-volume-998d6fbf-1b6a-4ae1-b23c-2461f5dbe71c": Phase="Pending", Reason="", readiness=false. Elapsed: 13.675104ms Mar 30 21:42:21.007: INFO: Pod "downwardapi-volume-998d6fbf-1b6a-4ae1-b23c-2461f5dbe71c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018029176s Mar 30 21:42:23.023: INFO: Pod "downwardapi-volume-998d6fbf-1b6a-4ae1-b23c-2461f5dbe71c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034129491s STEP: Saw pod success Mar 30 21:42:23.023: INFO: Pod "downwardapi-volume-998d6fbf-1b6a-4ae1-b23c-2461f5dbe71c" satisfied condition "success or failure" Mar 30 21:42:23.026: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-998d6fbf-1b6a-4ae1-b23c-2461f5dbe71c container client-container: STEP: delete the pod Mar 30 21:42:23.040: INFO: Waiting for pod downwardapi-volume-998d6fbf-1b6a-4ae1-b23c-2461f5dbe71c to disappear Mar 30 21:42:23.054: INFO: Pod downwardapi-volume-998d6fbf-1b6a-4ae1-b23c-2461f5dbe71c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:42:23.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6139" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":141,"skipped":2380,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} S ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:42:23.061: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-f03cd4ed-f7e1-49b1-b000-fbb16a01bc72 STEP: Creating configMap with name cm-test-opt-upd-78ec599d-99f9-490d-a9aa-e591a9b21f53 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-f03cd4ed-f7e1-49b1-b000-fbb16a01bc72 STEP: Updating configmap cm-test-opt-upd-78ec599d-99f9-490d-a9aa-e591a9b21f53 STEP: Creating configMap with name cm-test-opt-create-887b209d-758a-4639-b25b-a0b36d50d45a STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:43:57.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2502" for this suite. 
• [SLOW TEST:94.655 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":142,"skipped":2381,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:43:57.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-7381 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-7381 I0330 21:43:57.869783 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-7381, replica count: 2 I0330 21:44:00.920245 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0330 21:44:03.920459 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 30 21:44:03.920: INFO: Creating new exec pod Mar 30 21:44:08.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7381 execpodch9w8 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Mar 30 21:44:09.138: INFO: stderr: "I0330 21:44:09.072889 1456 log.go:172] (0xc0007bc9a0) (0xc0007b8000) Create stream\nI0330 21:44:09.072952 1456 log.go:172] (0xc0007bc9a0) (0xc0007b8000) Stream added, broadcasting: 1\nI0330 21:44:09.075666 1456 log.go:172] (0xc0007bc9a0) Reply frame received for 1\nI0330 21:44:09.075699 1456 log.go:172] (0xc0007bc9a0) (0xc0008280a0) Create stream\nI0330 21:44:09.075714 1456 log.go:172] (0xc0007bc9a0) (0xc0008280a0) Stream added, broadcasting: 3\nI0330 21:44:09.076718 1456 log.go:172] (0xc0007bc9a0) Reply frame received for 3\nI0330 21:44:09.076760 1456 log.go:172] (0xc0007bc9a0) (0xc0007b80a0) Create stream\nI0330 21:44:09.076776 1456 log.go:172] (0xc0007bc9a0) (0xc0007b80a0) Stream added, broadcasting: 5\nI0330 21:44:09.078162 1456 log.go:172] (0xc0007bc9a0) Reply frame received for 5\nI0330 21:44:09.130646 1456 log.go:172] (0xc0007bc9a0) Data frame received for 5\nI0330 21:44:09.130710 1456 log.go:172] (0xc0007b80a0) (5) 
Data frame handling\nI0330 21:44:09.130741 1456 log.go:172] (0xc0007b80a0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0330 21:44:09.130903 1456 log.go:172] (0xc0007bc9a0) Data frame received for 5\nI0330 21:44:09.130934 1456 log.go:172] (0xc0007b80a0) (5) Data frame handling\nI0330 21:44:09.130961 1456 log.go:172] (0xc0007b80a0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0330 21:44:09.131203 1456 log.go:172] (0xc0007bc9a0) Data frame received for 5\nI0330 21:44:09.131229 1456 log.go:172] (0xc0007b80a0) (5) Data frame handling\nI0330 21:44:09.131505 1456 log.go:172] (0xc0007bc9a0) Data frame received for 3\nI0330 21:44:09.131536 1456 log.go:172] (0xc0008280a0) (3) Data frame handling\nI0330 21:44:09.133497 1456 log.go:172] (0xc0007bc9a0) Data frame received for 1\nI0330 21:44:09.133531 1456 log.go:172] (0xc0007b8000) (1) Data frame handling\nI0330 21:44:09.133558 1456 log.go:172] (0xc0007b8000) (1) Data frame sent\nI0330 21:44:09.133582 1456 log.go:172] (0xc0007bc9a0) (0xc0007b8000) Stream removed, broadcasting: 1\nI0330 21:44:09.133603 1456 log.go:172] (0xc0007bc9a0) Go away received\nI0330 21:44:09.134165 1456 log.go:172] (0xc0007bc9a0) (0xc0007b8000) Stream removed, broadcasting: 1\nI0330 21:44:09.134196 1456 log.go:172] (0xc0007bc9a0) (0xc0008280a0) Stream removed, broadcasting: 3\nI0330 21:44:09.134209 1456 log.go:172] (0xc0007bc9a0) (0xc0007b80a0) Stream removed, broadcasting: 5\n" Mar 30 21:44:09.139: INFO: stdout: "" Mar 30 21:44:09.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7381 execpodch9w8 -- /bin/sh -x -c nc -zv -t -w 2 10.109.157.232 80' Mar 30 21:44:09.322: INFO: stderr: "I0330 21:44:09.259477 1478 log.go:172] (0xc000104f20) (0xc0009b0500) Create stream\nI0330 21:44:09.259550 1478 log.go:172] (0xc000104f20) (0xc0009b0500) Stream added, broadcasting: 1\nI0330 21:44:09.266600 1478 log.go:172] (0xc000104f20) Reply frame received for 1\nI0330 21:44:09.266633 1478 log.go:172] (0xc000104f20) (0xc00096e0a0) Create stream\nI0330 21:44:09.266648 1478 log.go:172] (0xc000104f20) (0xc00096e0a0) Stream added, broadcasting: 3\nI0330 21:44:09.267761 1478 log.go:172] (0xc000104f20) Reply frame received for 3\nI0330 21:44:09.267853 1478 log.go:172] (0xc000104f20) (0xc0009b05a0) Create stream\nI0330 21:44:09.267871 1478 log.go:172] (0xc000104f20) (0xc0009b05a0) Stream added, broadcasting: 5\nI0330 21:44:09.268713 1478 log.go:172] (0xc000104f20) Reply frame received for 5\nI0330 21:44:09.316081 1478 log.go:172] (0xc000104f20) Data frame received for 5\nI0330 21:44:09.316177 1478 log.go:172] (0xc0009b05a0) (5) Data frame handling\nI0330 21:44:09.316211 1478 log.go:172] (0xc0009b05a0) (5) Data frame sent\n+ nc -zv -t -w 2 10.109.157.232 80\nConnection to 10.109.157.232 80 port [tcp/http] succeeded!\nI0330 21:44:09.316251 1478 log.go:172] (0xc000104f20) Data frame received for 3\nI0330 21:44:09.316292 1478 log.go:172] (0xc00096e0a0) (3) Data frame handling\nI0330 21:44:09.316337 1478 log.go:172] (0xc000104f20) Data frame received for 5\nI0330 21:44:09.316356 1478 log.go:172] (0xc0009b05a0) (5) Data frame handling\nI0330 21:44:09.318442 1478 log.go:172] (0xc000104f20) Data frame received for 1\nI0330 21:44:09.318486 1478 log.go:172] (0xc0009b0500) (1) Data frame handling\nI0330 21:44:09.318518 1478 log.go:172] (0xc0009b0500) (1) Data frame sent\nI0330 21:44:09.318553 1478 log.go:172] (0xc000104f20) (0xc0009b0500) Stream removed, broadcasting: 1\nI0330 21:44:09.318568 1478 
log.go:172] (0xc000104f20) Go away received\nI0330 21:44:09.318913 1478 log.go:172] (0xc000104f20) (0xc0009b0500) Stream removed, broadcasting: 1\nI0330 21:44:09.318926 1478 log.go:172] (0xc000104f20) (0xc00096e0a0) Stream removed, broadcasting: 3\nI0330 21:44:09.318932 1478 log.go:172] (0xc000104f20) (0xc0009b05a0) Stream removed, broadcasting: 5\n" Mar 30 21:44:09.322: INFO: stdout: "" Mar 30 21:44:09.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7381 execpodch9w8 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 30527' Mar 30 21:44:09.540: INFO: stderr: "I0330 21:44:09.466309 1497 log.go:172] (0xc000104e70) (0xc0006a0140) Create stream\nI0330 21:44:09.466359 1497 log.go:172] (0xc000104e70) (0xc0006a0140) Stream added, broadcasting: 1\nI0330 21:44:09.468914 1497 log.go:172] (0xc000104e70) Reply frame received for 1\nI0330 21:44:09.468961 1497 log.go:172] (0xc000104e70) (0xc0006d7a40) Create stream\nI0330 21:44:09.468977 1497 log.go:172] (0xc000104e70) (0xc0006d7a40) Stream added, broadcasting: 3\nI0330 21:44:09.470256 1497 log.go:172] (0xc000104e70) Reply frame received for 3\nI0330 21:44:09.470316 1497 log.go:172] (0xc000104e70) (0xc000626640) Create stream\nI0330 21:44:09.470349 1497 log.go:172] (0xc000104e70) (0xc000626640) Stream added, broadcasting: 5\nI0330 21:44:09.471284 1497 log.go:172] (0xc000104e70) Reply frame received for 5\nI0330 21:44:09.534991 1497 log.go:172] (0xc000104e70) Data frame received for 3\nI0330 21:44:09.535047 1497 log.go:172] (0xc0006d7a40) (3) Data frame handling\nI0330 21:44:09.535082 1497 log.go:172] (0xc000104e70) Data frame received for 5\nI0330 21:44:09.535107 1497 log.go:172] (0xc000626640) (5) Data frame handling\nI0330 21:44:09.535129 1497 log.go:172] (0xc000626640) (5) Data frame sent\nI0330 21:44:09.535152 1497 log.go:172] (0xc000104e70) Data frame received for 5\nI0330 21:44:09.535170 1497 log.go:172] (0xc000626640) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.10 30527\nConnection to 172.17.0.10 30527 port [tcp/30527] succeeded!\nI0330 21:44:09.536363 1497 log.go:172] (0xc000104e70) Data frame received for 1\nI0330 21:44:09.536385 1497 log.go:172] (0xc0006a0140) (1) Data frame handling\nI0330 21:44:09.536394 1497 log.go:172] (0xc0006a0140) (1) Data frame sent\nI0330 21:44:09.536403 1497 log.go:172] (0xc000104e70) (0xc0006a0140) Stream removed, broadcasting: 1\nI0330 21:44:09.536495 1497 log.go:172] (0xc000104e70) Go away received\nI0330 21:44:09.536683 1497 log.go:172] (0xc000104e70) (0xc0006a0140) Stream removed, broadcasting: 1\nI0330 21:44:09.536697 1497 log.go:172] (0xc000104e70) (0xc0006d7a40) Stream removed, broadcasting: 3\nI0330 21:44:09.536704 1497 log.go:172] (0xc000104e70) (0xc000626640) Stream removed, broadcasting: 5\n" Mar 30 21:44:09.540: INFO: stdout: "" Mar 30 21:44:09.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7381 execpodch9w8 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 30527' Mar 30 21:44:09.731: INFO: stderr: "I0330 21:44:09.667410 1518 log.go:172] (0xc0009e4000) (0xc000b36000) Create stream\nI0330 21:44:09.667497 1518 log.go:172] (0xc0009e4000) (0xc000b36000) Stream added, broadcasting: 1\nI0330 21:44:09.669781 1518 log.go:172] (0xc0009e4000) Reply frame received for 1\nI0330 21:44:09.669805 1518 log.go:172] (0xc0009e4000) (0xc000b360a0) Create stream\nI0330 21:44:09.669812 1518 log.go:172] (0xc0009e4000) (0xc000b360a0) Stream added, broadcasting: 3\nI0330 21:44:09.670515 1518 log.go:172] (0xc0009e4000) Reply frame 
received for 3\nI0330 21:44:09.670546 1518 log.go:172] (0xc0009e4000) (0xc000b361e0) Create stream\nI0330 21:44:09.670555 1518 log.go:172] (0xc0009e4000) (0xc000b361e0) Stream added, broadcasting: 5\nI0330 21:44:09.671314 1518 log.go:172] (0xc0009e4000) Reply frame received for 5\nI0330 21:44:09.725585 1518 log.go:172] (0xc0009e4000) Data frame received for 3\nI0330 21:44:09.725620 1518 log.go:172] (0xc000b360a0) (3) Data frame handling\nI0330 21:44:09.725640 1518 log.go:172] (0xc0009e4000) Data frame received for 5\nI0330 21:44:09.725654 1518 log.go:172] (0xc000b361e0) (5) Data frame handling\nI0330 21:44:09.725661 1518 log.go:172] (0xc000b361e0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.8 30527\nConnection to 172.17.0.8 30527 port [tcp/30527] succeeded!\nI0330 21:44:09.725755 1518 log.go:172] (0xc0009e4000) Data frame received for 5\nI0330 21:44:09.725772 1518 log.go:172] (0xc000b361e0) (5) Data frame handling\nI0330 21:44:09.727353 1518 log.go:172] (0xc0009e4000) Data frame received for 1\nI0330 21:44:09.727366 1518 log.go:172] (0xc000b36000) (1) Data frame handling\nI0330 21:44:09.727372 1518 log.go:172] (0xc000b36000) (1) Data frame sent\nI0330 21:44:09.727518 1518 log.go:172] (0xc0009e4000) (0xc000b36000) Stream removed, broadcasting: 1\nI0330 21:44:09.727587 1518 log.go:172] (0xc0009e4000) Go away received\nI0330 21:44:09.728006 1518 log.go:172] (0xc0009e4000) (0xc000b36000) Stream removed, broadcasting: 1\nI0330 21:44:09.728031 1518 log.go:172] (0xc0009e4000) (0xc000b360a0) Stream removed, broadcasting: 3\nI0330 21:44:09.728044 1518 log.go:172] (0xc0009e4000) (0xc000b361e0) Stream removed, broadcasting: 5\n" Mar 30 21:44:09.731: INFO: stdout: "" Mar 30 21:44:09.731: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:44:09.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7381" for this suite. 
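Condensed, the type flip verified above is a create-then-patch. The patch shape is my reading of what leaving type=ExternalName requires (externalName cleared, ports and selector supplied), so treat this as a sketch rather than the test's exact API calls; the external name and placeholders are hypothetical:

kubectl create service externalname externalname-service --external-name=example.com
kubectl patch service externalname-service -p \
  '{"spec":{"type":"NodePort","externalName":null,"selector":{"name":"externalname-service"},"ports":[{"port":80}]}}'
# Back the service with pods (the test uses a replication controller), then
# verify reachability the same way the log does:
NODE_PORT=$(kubectl get svc externalname-service -o jsonpath='{.spec.ports[0].nodePort}')
nc -zv -t -w 2 <node-ip> "$NODE_PORT"   # <node-ip> left as a placeholder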
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.066 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":143,"skipped":2397,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} S ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:44:09.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1788 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 30 21:44:09.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-1443' Mar 30 21:44:09.929: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 30 21:44:09.929: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1793 Mar 30 21:44:09.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-1443' Mar 30 21:44:10.048: INFO: stderr: "" Mar 30 21:44:10.048: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:44:10.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1443" for this suite. 
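The deprecation warning in the stderr above is the point of interest: the job/v1 generator was removed in later releases. A sketch of the deprecated form the test exercises and its non-deprecated equivalent, using the same image:

# Deprecated form exercised by the test:
kubectl run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 \
  --image=docker.io/library/httpd:2.4.38-alpine
# Replacement (kubectl create job defaults the pod restartPolicy to Never;
# use a manifest if OnFailure is required):
kubectl create job e2e-test-httpd-job --image=docker.io/library/httpd:2.4.38-alpine
kubectl get jobs e2e-test-httpd-job
kubectl delete jobs e2e-test-httpd-job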
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":144,"skipped":2398,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:44:10.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-8308bdd3-079e-4438-b778-7febe12f04a9 STEP: Creating a pod to test consume configMaps Mar 30 21:44:10.150: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9e1bbc98-7961-4d4f-ad3a-21c355b29e71" in namespace "projected-4234" to be "success or failure" Mar 30 21:44:10.154: INFO: Pod "pod-projected-configmaps-9e1bbc98-7961-4d4f-ad3a-21c355b29e71": Phase="Pending", Reason="", readiness=false. Elapsed: 3.490631ms Mar 30 21:44:12.174: INFO: Pod "pod-projected-configmaps-9e1bbc98-7961-4d4f-ad3a-21c355b29e71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023727111s Mar 30 21:44:14.178: INFO: Pod "pod-projected-configmaps-9e1bbc98-7961-4d4f-ad3a-21c355b29e71": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028120086s STEP: Saw pod success Mar 30 21:44:14.178: INFO: Pod "pod-projected-configmaps-9e1bbc98-7961-4d4f-ad3a-21c355b29e71" satisfied condition "success or failure" Mar 30 21:44:14.181: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-9e1bbc98-7961-4d4f-ad3a-21c355b29e71 container projected-configmap-volume-test: STEP: delete the pod Mar 30 21:44:14.199: INFO: Waiting for pod pod-projected-configmaps-9e1bbc98-7961-4d4f-ad3a-21c355b29e71 to disappear Mar 30 21:44:14.203: INFO: Pod pod-projected-configmaps-9e1bbc98-7961-4d4f-ad3a-21c355b29e71 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:44:14.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4234" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":145,"skipped":2412,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:44:14.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 30 21:44:14.291: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Mar 30 21:44:17.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6216 create -f -' Mar 30 21:44:20.335: INFO: stderr: "" Mar 30 21:44:20.336: INFO: stdout: "e2e-test-crd-publish-openapi-8118-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Mar 30 21:44:20.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6216 delete e2e-test-crd-publish-openapi-8118-crds test-cr' Mar 30 21:44:20.446: INFO: stderr: "" Mar 30 21:44:20.446: INFO: stdout: "e2e-test-crd-publish-openapi-8118-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Mar 30 21:44:20.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6216 apply -f -' Mar 30 21:44:20.668: INFO: stderr: "" Mar 30 21:44:20.668: INFO: stdout: "e2e-test-crd-publish-openapi-8118-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Mar 30 21:44:20.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6216 delete e2e-test-crd-publish-openapi-8118-crds test-cr' Mar 30 21:44:20.776: INFO: stderr: "" Mar 30 21:44:20.776: INFO: stdout: "e2e-test-crd-publish-openapi-8118-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Mar 30 21:44:20.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8118-crds' Mar 30 21:44:21.045: INFO: stderr: "" Mar 30 21:44:21.045: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8118-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:44:23.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6216" for this suite. 
• [SLOW TEST:9.718 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":146,"skipped":2412,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:44:23.929: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation Mar 30 21:44:24.014: INFO: >>> kubeConfig: /root/.kube/config Mar 30 21:44:25.909: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:44:36.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3813" for this suite. 
• [SLOW TEST:12.371 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":147,"skipped":2466,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:44:36.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Mar 30 21:44:44.451: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 30 21:44:44.462: INFO: Pod pod-with-poststart-exec-hook still exists Mar 30 21:44:46.462: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 30 21:44:46.466: INFO: Pod pod-with-poststart-exec-hook still exists Mar 30 21:44:48.462: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Mar 30 21:44:48.466: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:44:48.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8891" for this suite. 
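The field shape under test above, as a pod sketch. The real test's hook execs a command that reports back to the handler pod created in BeforeEach; the echo here is a stand-in for that callback:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: pod-with-poststart-exec-hook
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        exec:
          command: ["sh", "-c", "echo poststart > /tmp/poststart"]
EOF
# postStart fires right after the container starts; the container does not
# transition to Running until the hook completes.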
• [SLOW TEST:12.173 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":148,"skipped":2492,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} S ------------------------------ [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:44:48.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1692 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 30 21:44:48.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-9849' Mar 30 21:44:48.638: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 30 21:44:48.638: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created Mar 30 21:44:48.661: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 Mar 30 21:44:48.672: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Mar 30 21:44:48.691: INFO: scanned /root for discovery docs: Mar 30 21:44:48.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-9849' Mar 30 21:45:04.511: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Mar 30 21:45:04.511: INFO: stdout: "Created e2e-test-httpd-rc-4adb513639c339b98ceee3c67d984679\nScaling up e2e-test-httpd-rc-4adb513639c339b98ceee3c67d984679 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-4adb513639c339b98ceee3c67d984679 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-4adb513639c339b98ceee3c67d984679 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" Mar 30 21:45:04.511: INFO: stdout: "Created e2e-test-httpd-rc-4adb513639c339b98ceee3c67d984679\nScaling up e2e-test-httpd-rc-4adb513639c339b98ceee3c67d984679 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-4adb513639c339b98ceee3c67d984679 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-4adb513639c339b98ceee3c67d984679 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. Mar 30 21:45:04.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-9849' Mar 30 21:45:04.601: INFO: stderr: "" Mar 30 21:45:04.601: INFO: stdout: "e2e-test-httpd-rc-4adb513639c339b98ceee3c67d984679-rfmxt " Mar 30 21:45:04.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-4adb513639c339b98ceee3c67d984679-rfmxt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9849' Mar 30 21:45:04.707: INFO: stderr: "" Mar 30 21:45:04.707: INFO: stdout: "true" Mar 30 21:45:04.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-4adb513639c339b98ceee3c67d984679-rfmxt -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9849' Mar 30 21:45:04.799: INFO: stderr: "" Mar 30 21:45:04.799: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" Mar 30 21:45:04.799: INFO: e2e-test-httpd-rc-4adb513639c339b98ceee3c67d984679-rfmxt is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1698 Mar 30 21:45:04.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-9849' Mar 30 21:45:04.903: INFO: stderr: "" Mar 30 21:45:04.903: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:45:04.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9849" for this suite. • [SLOW TEST:16.435 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1687 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":278,"completed":149,"skipped":2493,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:45:04.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-435fe3e2-17f1-4fdf-abf6-4cddc263b446 in namespace container-probe-9074 Mar 30 21:45:08.971: INFO: Started pod busybox-435fe3e2-17f1-4fdf-abf6-4cddc263b446 in namespace container-probe-9074 STEP: checking the pod's current state and verifying that restartCount is present Mar 30 21:45:08.974: INFO: Initial restart count of pod busybox-435fe3e2-17f1-4fdf-abf6-4cddc263b446 is 0 Mar 30 21:46:05.150: INFO: Restart count of pod container-probe-9074/busybox-435fe3e2-17f1-4fdf-abf6-4cddc263b446 is now 1 (56.175279506s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:46:05.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "container-probe-9074" for this suite. • [SLOW TEST:60.302 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":150,"skipped":2496,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:46:05.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 30 21:46:05.847: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 30 21:46:07.877: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721201565, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721201565, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721201565, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721201565, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 30 21:46:10.943: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the 
admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:46:21.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7206" for this suite. STEP: Destroying namespace "webhook-7206-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.975 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":151,"skipped":2531,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:46:21.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-9c1696bb-436c-46cd-a6f2-6a6bf1720873 STEP: Creating a pod to test consume configMaps Mar 30 21:46:21.262: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0986c945-19cb-489b-80fa-9ebbf07f5985" in namespace "projected-419" to be "success or failure" Mar 30 21:46:21.264: INFO: Pod "pod-projected-configmaps-0986c945-19cb-489b-80fa-9ebbf07f5985": Phase="Pending", Reason="", readiness=false. Elapsed: 1.902743ms Mar 30 21:46:23.296: INFO: Pod "pod-projected-configmaps-0986c945-19cb-489b-80fa-9ebbf07f5985": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033399922s Mar 30 21:46:25.300: INFO: Pod "pod-projected-configmaps-0986c945-19cb-489b-80fa-9ebbf07f5985": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.037829833s STEP: Saw pod success Mar 30 21:46:25.300: INFO: Pod "pod-projected-configmaps-0986c945-19cb-489b-80fa-9ebbf07f5985" satisfied condition "success or failure" Mar 30 21:46:25.303: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-0986c945-19cb-489b-80fa-9ebbf07f5985 container projected-configmap-volume-test: STEP: delete the pod Mar 30 21:46:25.351: INFO: Waiting for pod pod-projected-configmaps-0986c945-19cb-489b-80fa-9ebbf07f5985 to disappear Mar 30 21:46:25.359: INFO: Pod pod-projected-configmaps-0986c945-19cb-489b-80fa-9ebbf07f5985 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:46:25.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-419" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":152,"skipped":2532,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:46:25.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Mar 30 21:46:25.406: INFO: >>> kubeConfig: /root/.kube/config Mar 30 21:46:28.338: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:46:38.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1138" for this suite. 
• [SLOW TEST:13.411 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":153,"skipped":2532,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:46:38.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9950.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9950.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 30 21:46:44.901: INFO: DNS probes using dns-9950/dns-test-72474b6a-c938-46a4-87d9-a2319f863d5c succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:46:44.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9950" for this suite. 
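Outside the wheezy/jessie prober harness, the same cluster-DNS check is a one-liner from any pod with dig; the dnsutils image appears in the node inventory earlier in this log, and the pod name here is arbitrary:

kubectl run dnsutils --restart=Never \
  --image=gcr.io/kubernetes-e2e-test-images/dnsutils:1.1 -- sleep 3600
kubectl exec dnsutils -- dig +short kubernetes.default.svc.cluster.local A
# A non-empty answer (the kubernetes service ClusterIP) is exactly what each
# `test -n "$check"` in the probe loops above asserts.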
• [SLOW TEST:6.231 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":154,"skipped":2532,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:46:45.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-218 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 30 21:46:45.330: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 30 21:47:11.459: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.227:8080/dial?request=hostname&protocol=http&host=10.244.1.226&port=8080&tries=1'] Namespace:pod-network-test-218 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 30 21:47:11.459: INFO: >>> kubeConfig: /root/.kube/config I0330 21:47:11.498334 6 log.go:172] (0xc0062222c0) (0xc0028c2b40) Create stream I0330 21:47:11.498373 6 log.go:172] (0xc0062222c0) (0xc0028c2b40) Stream added, broadcasting: 1 I0330 21:47:11.500452 6 log.go:172] (0xc0062222c0) Reply frame received for 1 I0330 21:47:11.500525 6 log.go:172] (0xc0062222c0) (0xc002039680) Create stream I0330 21:47:11.500560 6 log.go:172] (0xc0062222c0) (0xc002039680) Stream added, broadcasting: 3 I0330 21:47:11.501790 6 log.go:172] (0xc0062222c0) Reply frame received for 3 I0330 21:47:11.501849 6 log.go:172] (0xc0062222c0) (0xc0027e19a0) Create stream I0330 21:47:11.501866 6 log.go:172] (0xc0062222c0) (0xc0027e19a0) Stream added, broadcasting: 5 I0330 21:47:11.502851 6 log.go:172] (0xc0062222c0) Reply frame received for 5 I0330 21:47:11.582849 6 log.go:172] (0xc0062222c0) Data frame received for 3 I0330 21:47:11.582879 6 log.go:172] (0xc002039680) (3) Data frame handling I0330 21:47:11.582914 6 log.go:172] (0xc002039680) (3) Data frame sent I0330 21:47:11.583280 6 log.go:172] (0xc0062222c0) Data frame received for 5 I0330 21:47:11.583294 6 log.go:172] (0xc0027e19a0) (5) Data frame handling I0330 21:47:11.583392 6 log.go:172] (0xc0062222c0) Data frame received for 3 I0330 21:47:11.583432 6 log.go:172] (0xc002039680) (3) Data frame handling I0330 21:47:11.585386 6 log.go:172] (0xc0062222c0) Data frame received for 1 I0330 21:47:11.585410 6 log.go:172] (0xc0028c2b40) (1) Data frame handling I0330 21:47:11.585433 6 
log.go:172] (0xc0028c2b40) (1) Data frame sent I0330 21:47:11.585541 6 log.go:172] (0xc0062222c0) (0xc0028c2b40) Stream removed, broadcasting: 1 I0330 21:47:11.585618 6 log.go:172] (0xc0062222c0) (0xc0028c2b40) Stream removed, broadcasting: 1 I0330 21:47:11.585627 6 log.go:172] (0xc0062222c0) (0xc002039680) Stream removed, broadcasting: 3 I0330 21:47:11.585757 6 log.go:172] (0xc0062222c0) (0xc0027e19a0) Stream removed, broadcasting: 5 I0330 21:47:11.585814 6 log.go:172] (0xc0062222c0) Go away received Mar 30 21:47:11.585: INFO: Waiting for responses: map[] Mar 30 21:47:11.589: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.227:8080/dial?request=hostname&protocol=http&host=10.244.2.30&port=8080&tries=1'] Namespace:pod-network-test-218 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 30 21:47:11.589: INFO: >>> kubeConfig: /root/.kube/config I0330 21:47:11.622437 6 log.go:172] (0xc0062229a0) (0xc0028c2f00) Create stream I0330 21:47:11.622461 6 log.go:172] (0xc0062229a0) (0xc0028c2f00) Stream added, broadcasting: 1 I0330 21:47:11.624321 6 log.go:172] (0xc0062229a0) Reply frame received for 1 I0330 21:47:11.624371 6 log.go:172] (0xc0062229a0) (0xc0014f60a0) Create stream I0330 21:47:11.624391 6 log.go:172] (0xc0062229a0) (0xc0014f60a0) Stream added, broadcasting: 3 I0330 21:47:11.625269 6 log.go:172] (0xc0062229a0) Reply frame received for 3 I0330 21:47:11.625296 6 log.go:172] (0xc0062229a0) (0xc0028c32c0) Create stream I0330 21:47:11.625314 6 log.go:172] (0xc0062229a0) (0xc0028c32c0) Stream added, broadcasting: 5 I0330 21:47:11.626316 6 log.go:172] (0xc0062229a0) Reply frame received for 5 I0330 21:47:11.698994 6 log.go:172] (0xc0062229a0) Data frame received for 3 I0330 21:47:11.699018 6 log.go:172] (0xc0014f60a0) (3) Data frame handling I0330 21:47:11.699045 6 log.go:172] (0xc0014f60a0) (3) Data frame sent I0330 21:47:11.699700 6 log.go:172] (0xc0062229a0) Data frame received for 5 I0330 21:47:11.699722 6 log.go:172] (0xc0062229a0) Data frame received for 3 I0330 21:47:11.699744 6 log.go:172] (0xc0014f60a0) (3) Data frame handling I0330 21:47:11.699766 6 log.go:172] (0xc0028c32c0) (5) Data frame handling I0330 21:47:11.701068 6 log.go:172] (0xc0062229a0) Data frame received for 1 I0330 21:47:11.701083 6 log.go:172] (0xc0028c2f00) (1) Data frame handling I0330 21:47:11.701090 6 log.go:172] (0xc0028c2f00) (1) Data frame sent I0330 21:47:11.701103 6 log.go:172] (0xc0062229a0) (0xc0028c2f00) Stream removed, broadcasting: 1 I0330 21:47:11.701220 6 log.go:172] (0xc0062229a0) Go away received I0330 21:47:11.701319 6 log.go:172] (0xc0062229a0) (0xc0028c2f00) Stream removed, broadcasting: 1 I0330 21:47:11.701340 6 log.go:172] (0xc0062229a0) (0xc0014f60a0) Stream removed, broadcasting: 3 I0330 21:47:11.701350 6 log.go:172] (0xc0062229a0) (0xc0028c32c0) Stream removed, broadcasting: 5 Mar 30 21:47:11.701: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:47:11.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-218" for this suite. 
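The ExecWithOptions blocks above are the framework shelling into the host-network test pod and asking the agnhost /dial endpoint to reach each webserver pod over HTTP. The same probe can be issued by hand; the namespace, pod name, and 10.244.x.x pod IPs below are the ones from this run and will differ on another cluster:

kubectl exec -n pod-network-test-218 host-test-container-pod -- \
  curl -g -q -s 'http://10.244.1.227:8080/dial?request=hostname&protocol=http&host=10.244.1.226&port=8080&tries=1'

The endpoint replies with a small JSON document listing the hostnames that answered (something like {"responses":["netserver-0"]}); the "Waiting for responses: map[]" lines in the log mean every expected hostname was already seen.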
• [SLOW TEST:26.700 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":155,"skipped":2546,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:47:11.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name secret-emptykey-test-d9a7260e-f2f9-4b9a-b576-e097d4f5fd1d [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:47:11.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-20" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":156,"skipped":2568,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:47:11.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override all Mar 30 21:47:11.877: INFO: Waiting up to 5m0s for pod "client-containers-b94ff64c-7561-4c69-a143-ce86a6fa18d5" in namespace "containers-1884" to be "success or failure" Mar 30 21:47:11.919: INFO: Pod "client-containers-b94ff64c-7561-4c69-a143-ce86a6fa18d5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 41.465853ms Mar 30 21:47:13.926: INFO: Pod "client-containers-b94ff64c-7561-4c69-a143-ce86a6fa18d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048344384s Mar 30 21:47:15.943: INFO: Pod "client-containers-b94ff64c-7561-4c69-a143-ce86a6fa18d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065691404s STEP: Saw pod success Mar 30 21:47:15.943: INFO: Pod "client-containers-b94ff64c-7561-4c69-a143-ce86a6fa18d5" satisfied condition "success or failure" Mar 30 21:47:15.946: INFO: Trying to get logs from node jerma-worker2 pod client-containers-b94ff64c-7561-4c69-a143-ce86a6fa18d5 container test-container: STEP: delete the pod Mar 30 21:47:15.973: INFO: Waiting for pod client-containers-b94ff64c-7561-4c69-a143-ce86a6fa18d5 to disappear Mar 30 21:47:15.984: INFO: Pod client-containers-b94ff64c-7561-4c69-a143-ce86a6fa18d5 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:47:15.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1884" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":157,"skipped":2579,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:47:15.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-7330 STEP: creating a selector STEP: Creating the service pods in kubernetes Mar 30 21:47:16.027: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 30 21:47:42.199: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.229:8080/dial?request=hostname&protocol=udp&host=10.244.1.228&port=8081&tries=1'] Namespace:pod-network-test-7330 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 30 21:47:42.199: INFO: >>> kubeConfig: /root/.kube/config I0330 21:47:42.227668 6 log.go:172] (0xc006442630) (0xc001fe2140) Create stream I0330 21:47:42.227713 6 log.go:172] (0xc006442630) (0xc001fe2140) Stream added, broadcasting: 1 I0330 21:47:42.230118 6 log.go:172] (0xc006442630) Reply frame received for 1 I0330 21:47:42.230162 6 log.go:172] (0xc006442630) (0xc0024a6320) Create stream I0330 21:47:42.230178 6 log.go:172] (0xc006442630) (0xc0024a6320) Stream added, broadcasting: 3 I0330 21:47:42.231261 6 log.go:172] (0xc006442630) Reply frame received for 3 I0330 21:47:42.231307 6 log.go:172] 
(0xc006442630) (0xc0024a6460) Create stream I0330 21:47:42.231319 6 log.go:172] (0xc006442630) (0xc0024a6460) Stream added, broadcasting: 5 I0330 21:47:42.232290 6 log.go:172] (0xc006442630) Reply frame received for 5 I0330 21:47:42.320841 6 log.go:172] (0xc006442630) Data frame received for 3 I0330 21:47:42.320891 6 log.go:172] (0xc0024a6320) (3) Data frame handling I0330 21:47:42.320940 6 log.go:172] (0xc0024a6320) (3) Data frame sent I0330 21:47:42.321056 6 log.go:172] (0xc006442630) Data frame received for 3 I0330 21:47:42.321081 6 log.go:172] (0xc0024a6320) (3) Data frame handling I0330 21:47:42.322032 6 log.go:172] (0xc006442630) Data frame received for 5 I0330 21:47:42.322067 6 log.go:172] (0xc0024a6460) (5) Data frame handling I0330 21:47:42.323097 6 log.go:172] (0xc006442630) Data frame received for 1 I0330 21:47:42.323129 6 log.go:172] (0xc001fe2140) (1) Data frame handling I0330 21:47:42.323150 6 log.go:172] (0xc001fe2140) (1) Data frame sent I0330 21:47:42.323173 6 log.go:172] (0xc006442630) (0xc001fe2140) Stream removed, broadcasting: 1 I0330 21:47:42.323197 6 log.go:172] (0xc006442630) Go away received I0330 21:47:42.323502 6 log.go:172] (0xc006442630) (0xc001fe2140) Stream removed, broadcasting: 1 I0330 21:47:42.323524 6 log.go:172] (0xc006442630) (0xc0024a6320) Stream removed, broadcasting: 3 I0330 21:47:42.323534 6 log.go:172] (0xc006442630) (0xc0024a6460) Stream removed, broadcasting: 5 Mar 30 21:47:42.323: INFO: Waiting for responses: map[] Mar 30 21:47:42.327: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.229:8080/dial?request=hostname&protocol=udp&host=10.244.2.32&port=8081&tries=1'] Namespace:pod-network-test-7330 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 30 21:47:42.327: INFO: >>> kubeConfig: /root/.kube/config I0330 21:47:42.360764 6 log.go:172] (0xc0064f64d0) (0xc0024a6780) Create stream I0330 21:47:42.360921 6 log.go:172] (0xc0064f64d0) (0xc0024a6780) Stream added, broadcasting: 1 I0330 21:47:42.365425 6 log.go:172] (0xc0064f64d0) Reply frame received for 1 I0330 21:47:42.365468 6 log.go:172] (0xc0064f64d0) (0xc0024a68c0) Create stream I0330 21:47:42.365478 6 log.go:172] (0xc0064f64d0) (0xc0024a68c0) Stream added, broadcasting: 3 I0330 21:47:42.366453 6 log.go:172] (0xc0064f64d0) Reply frame received for 3 I0330 21:47:42.366482 6 log.go:172] (0xc0064f64d0) (0xc0024a6960) Create stream I0330 21:47:42.366490 6 log.go:172] (0xc0064f64d0) (0xc0024a6960) Stream added, broadcasting: 5 I0330 21:47:42.367318 6 log.go:172] (0xc0064f64d0) Reply frame received for 5 I0330 21:47:42.425780 6 log.go:172] (0xc0064f64d0) Data frame received for 3 I0330 21:47:42.425821 6 log.go:172] (0xc0024a68c0) (3) Data frame handling I0330 21:47:42.425855 6 log.go:172] (0xc0024a68c0) (3) Data frame sent I0330 21:47:42.425891 6 log.go:172] (0xc0064f64d0) Data frame received for 3 I0330 21:47:42.425947 6 log.go:172] (0xc0024a68c0) (3) Data frame handling I0330 21:47:42.426063 6 log.go:172] (0xc0064f64d0) Data frame received for 5 I0330 21:47:42.426094 6 log.go:172] (0xc0024a6960) (5) Data frame handling I0330 21:47:42.427609 6 log.go:172] (0xc0064f64d0) Data frame received for 1 I0330 21:47:42.427628 6 log.go:172] (0xc0024a6780) (1) Data frame handling I0330 21:47:42.427639 6 log.go:172] (0xc0024a6780) (1) Data frame sent I0330 21:47:42.427652 6 log.go:172] (0xc0064f64d0) (0xc0024a6780) Stream removed, broadcasting: 1 I0330 21:47:42.427668 6 log.go:172] (0xc0064f64d0) Go away 
received I0330 21:47:42.427761 6 log.go:172] (0xc0064f64d0) (0xc0024a6780) Stream removed, broadcasting: 1 I0330 21:47:42.427782 6 log.go:172] (0xc0064f64d0) (0xc0024a68c0) Stream removed, broadcasting: 3 I0330 21:47:42.427795 6 log.go:172] (0xc0064f64d0) (0xc0024a6960) Stream removed, broadcasting: 5 Mar 30 21:47:42.427: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:47:42.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7330" for this suite. • [SLOW TEST:26.445 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":158,"skipped":2618,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:47:42.437: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1596 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 30 21:47:42.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-370' Mar 30 21:47:42.629: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 30 21:47:42.629: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created [AfterEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1602 Mar 30 21:47:44.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-370' Mar 30 21:47:44.761: INFO: stderr: "" Mar 30 21:47:44.761: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:47:44.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-370" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":159,"skipped":2640,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:47:44.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-projected-tzt9 STEP: Creating a pod to test atomic-volume-subpath Mar 30 21:47:44.891: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-tzt9" in namespace "subpath-5854" to be "success or failure" Mar 30 21:47:44.894: INFO: Pod "pod-subpath-test-projected-tzt9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.840975ms Mar 30 21:47:46.898: INFO: Pod "pod-subpath-test-projected-tzt9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007601516s Mar 30 21:47:48.902: INFO: Pod "pod-subpath-test-projected-tzt9": Phase="Running", Reason="", readiness=true. Elapsed: 4.01185514s Mar 30 21:47:50.906: INFO: Pod "pod-subpath-test-projected-tzt9": Phase="Running", Reason="", readiness=true. Elapsed: 6.015878298s Mar 30 21:47:52.910: INFO: Pod "pod-subpath-test-projected-tzt9": Phase="Running", Reason="", readiness=true. Elapsed: 8.019821424s Mar 30 21:47:54.914: INFO: Pod "pod-subpath-test-projected-tzt9": Phase="Running", Reason="", readiness=true. Elapsed: 10.02386026s Mar 30 21:47:56.926: INFO: Pod "pod-subpath-test-projected-tzt9": Phase="Running", Reason="", readiness=true. Elapsed: 12.035543276s Mar 30 21:47:58.930: INFO: Pod "pod-subpath-test-projected-tzt9": Phase="Running", Reason="", readiness=true. 
Elapsed: 14.039339497s Mar 30 21:48:00.934: INFO: Pod "pod-subpath-test-projected-tzt9": Phase="Running", Reason="", readiness=true. Elapsed: 16.043340737s Mar 30 21:48:02.938: INFO: Pod "pod-subpath-test-projected-tzt9": Phase="Running", Reason="", readiness=true. Elapsed: 18.047448848s Mar 30 21:48:04.941: INFO: Pod "pod-subpath-test-projected-tzt9": Phase="Running", Reason="", readiness=true. Elapsed: 20.05058825s Mar 30 21:48:06.964: INFO: Pod "pod-subpath-test-projected-tzt9": Phase="Running", Reason="", readiness=true. Elapsed: 22.073760839s Mar 30 21:48:08.969: INFO: Pod "pod-subpath-test-projected-tzt9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.07837203s STEP: Saw pod success Mar 30 21:48:08.969: INFO: Pod "pod-subpath-test-projected-tzt9" satisfied condition "success or failure" Mar 30 21:48:08.972: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-projected-tzt9 container test-container-subpath-projected-tzt9: STEP: delete the pod Mar 30 21:48:09.018: INFO: Waiting for pod pod-subpath-test-projected-tzt9 to disappear Mar 30 21:48:09.024: INFO: Pod pod-subpath-test-projected-tzt9 no longer exists STEP: Deleting pod pod-subpath-test-projected-tzt9 Mar 30 21:48:09.024: INFO: Deleting pod "pod-subpath-test-projected-tzt9" in namespace "subpath-5854" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:48:09.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5854" for this suite. • [SLOW TEST:24.263 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":160,"skipped":2651,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:48:09.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override arguments Mar 30 21:48:09.116: INFO: Waiting up to 5m0s for pod "client-containers-f8c9b575-90f9-48d5-89a9-b52101944289" in namespace "containers-5034" to be "success or failure" Mar 30 21:48:09.126: INFO: Pod "client-containers-f8c9b575-90f9-48d5-89a9-b52101944289": Phase="Pending", Reason="", readiness=false. 
Elapsed: 9.967536ms Mar 30 21:48:11.130: INFO: Pod "client-containers-f8c9b575-90f9-48d5-89a9-b52101944289": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013977582s Mar 30 21:48:13.165: INFO: Pod "client-containers-f8c9b575-90f9-48d5-89a9-b52101944289": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049688585s STEP: Saw pod success Mar 30 21:48:13.166: INFO: Pod "client-containers-f8c9b575-90f9-48d5-89a9-b52101944289" satisfied condition "success or failure" Mar 30 21:48:13.168: INFO: Trying to get logs from node jerma-worker pod client-containers-f8c9b575-90f9-48d5-89a9-b52101944289 container test-container: STEP: delete the pod Mar 30 21:48:13.223: INFO: Waiting for pod client-containers-f8c9b575-90f9-48d5-89a9-b52101944289 to disappear Mar 30 21:48:13.236: INFO: Pod client-containers-f8c9b575-90f9-48d5-89a9-b52101944289 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:48:13.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5034" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":161,"skipped":2656,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:48:13.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-6774/configmap-test-f8daf71b-114b-4a97-a318-777e82f479af STEP: Creating a pod to test consume configMaps Mar 30 21:48:13.334: INFO: Waiting up to 5m0s for pod "pod-configmaps-6b973dc1-9d63-4ad4-9d06-bab2dc7a30ca" in namespace "configmap-6774" to be "success or failure" Mar 30 21:48:13.340: INFO: Pod "pod-configmaps-6b973dc1-9d63-4ad4-9d06-bab2dc7a30ca": Phase="Pending", Reason="", readiness=false. Elapsed: 5.300624ms Mar 30 21:48:15.344: INFO: Pod "pod-configmaps-6b973dc1-9d63-4ad4-9d06-bab2dc7a30ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009695719s Mar 30 21:48:17.348: INFO: Pod "pod-configmaps-6b973dc1-9d63-4ad4-9d06-bab2dc7a30ca": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013671055s STEP: Saw pod success Mar 30 21:48:17.348: INFO: Pod "pod-configmaps-6b973dc1-9d63-4ad4-9d06-bab2dc7a30ca" satisfied condition "success or failure" Mar 30 21:48:17.351: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-6b973dc1-9d63-4ad4-9d06-bab2dc7a30ca container env-test: STEP: delete the pod Mar 30 21:48:17.369: INFO: Waiting for pod pod-configmaps-6b973dc1-9d63-4ad4-9d06-bab2dc7a30ca to disappear Mar 30 21:48:17.374: INFO: Pod pod-configmaps-6b973dc1-9d63-4ad4-9d06-bab2dc7a30ca no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:48:17.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6774" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":162,"skipped":2695,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:48:17.381: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token Mar 30 21:48:17.973: INFO: created pod pod-service-account-defaultsa Mar 30 21:48:17.973: INFO: pod pod-service-account-defaultsa service account token volume mount: true Mar 30 21:48:17.990: INFO: created pod pod-service-account-mountsa Mar 30 21:48:17.990: INFO: pod pod-service-account-mountsa service account token volume mount: true Mar 30 21:48:18.004: INFO: created pod pod-service-account-nomountsa Mar 30 21:48:18.004: INFO: pod pod-service-account-nomountsa service account token volume mount: false Mar 30 21:48:18.059: INFO: created pod pod-service-account-defaultsa-mountspec Mar 30 21:48:18.059: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Mar 30 21:48:18.094: INFO: created pod pod-service-account-mountsa-mountspec Mar 30 21:48:18.094: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Mar 30 21:48:18.111: INFO: created pod pod-service-account-nomountsa-mountspec Mar 30 21:48:18.111: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Mar 30 21:48:18.135: INFO: created pod pod-service-account-defaultsa-nomountspec Mar 30 21:48:18.136: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Mar 30 21:48:18.196: INFO: created pod pod-service-account-mountsa-nomountspec Mar 30 21:48:18.196: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Mar 30 21:48:18.219: INFO: created pod pod-service-account-nomountsa-nomountspec Mar 30 21:48:18.219: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: 
false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:48:18.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-7691" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":163,"skipped":2702,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:48:18.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:48:29.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6563" for this suite. • [SLOW TEST:11.468 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":278,"completed":164,"skipped":2765,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:48:29.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-2a95d84a-c324-408e-a8f2-0c1d40b282f7 STEP: Creating secret with name s-test-opt-upd-3fb8874b-931d-4fdd-812d-5a48a9328d30 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-2a95d84a-c324-408e-a8f2-0c1d40b282f7 STEP: Updating secret s-test-opt-upd-3fb8874b-931d-4fdd-812d-5a48a9328d30 STEP: Creating secret with name s-test-opt-create-d212c05e-071a-4df1-9181-38a44307e543 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:49:40.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9937" for this suite. • [SLOW TEST:70.471 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":165,"skipped":2802,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:49:40.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating api versions Mar 30 21:49:40.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Mar 30 21:49:40.565: INFO: stderr: "" Mar 30 21:49:40.565: INFO: stdout: 
"admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:49:40.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4935" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":278,"completed":166,"skipped":2817,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:49:40.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-16a678b1-9d74-48b3-ab66-8717a48384a8 STEP: Creating a pod to test consume secrets Mar 30 21:49:40.696: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b5e751cc-ece4-4698-b033-b42ff5ee86ec" in namespace "projected-6894" to be "success or failure" Mar 30 21:49:40.703: INFO: Pod "pod-projected-secrets-b5e751cc-ece4-4698-b033-b42ff5ee86ec": Phase="Pending", Reason="", readiness=false. Elapsed: 6.837916ms Mar 30 21:49:42.707: INFO: Pod "pod-projected-secrets-b5e751cc-ece4-4698-b033-b42ff5ee86ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010789081s Mar 30 21:49:44.711: INFO: Pod "pod-projected-secrets-b5e751cc-ece4-4698-b033-b42ff5ee86ec": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.014625059s STEP: Saw pod success Mar 30 21:49:44.711: INFO: Pod "pod-projected-secrets-b5e751cc-ece4-4698-b033-b42ff5ee86ec" satisfied condition "success or failure" Mar 30 21:49:44.714: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-b5e751cc-ece4-4698-b033-b42ff5ee86ec container projected-secret-volume-test: STEP: delete the pod Mar 30 21:49:44.763: INFO: Waiting for pod pod-projected-secrets-b5e751cc-ece4-4698-b033-b42ff5ee86ec to disappear Mar 30 21:49:44.790: INFO: Pod pod-projected-secrets-b5e751cc-ece4-4698-b033-b42ff5ee86ec no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:49:44.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6894" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":167,"skipped":2846,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:49:44.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 30 21:49:44.841: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Mar 30 21:49:47.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3869 create -f -' Mar 30 21:49:50.736: INFO: stderr: "" Mar 30 21:49:50.736: INFO: stdout: "e2e-test-crd-publish-openapi-9740-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Mar 30 21:49:50.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3869 delete e2e-test-crd-publish-openapi-9740-crds test-foo' Mar 30 21:49:50.853: INFO: stderr: "" Mar 30 21:49:50.853: INFO: stdout: "e2e-test-crd-publish-openapi-9740-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Mar 30 21:49:50.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3869 apply -f -' Mar 30 21:49:51.176: INFO: stderr: "" Mar 30 21:49:51.176: INFO: stdout: "e2e-test-crd-publish-openapi-9740-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Mar 30 21:49:51.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3869 delete e2e-test-crd-publish-openapi-9740-crds test-foo' Mar 30 21:49:51.308: INFO: stderr: "" Mar 30 21:49:51.308: INFO: stdout: "e2e-test-crd-publish-openapi-9740-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" 
deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Mar 30 21:49:51.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3869 create -f -' Mar 30 21:49:51.522: INFO: rc: 1 Mar 30 21:49:51.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3869 apply -f -' Mar 30 21:49:51.729: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Mar 30 21:49:51.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3869 create -f -' Mar 30 21:49:51.971: INFO: rc: 1 Mar 30 21:49:51.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3869 apply -f -' Mar 30 21:49:52.218: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Mar 30 21:49:52.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9740-crds' Mar 30 21:49:52.445: INFO: stderr: "" Mar 30 21:49:52.445: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9740-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Mar 30 21:49:52.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9740-crds.metadata' Mar 30 21:49:52.675: INFO: stderr: "" Mar 30 21:49:52.675: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9740-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. 
This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. 
This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. 
DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Mar 30 21:49:52.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9740-crds.spec' Mar 30 21:49:52.937: INFO: stderr: "" Mar 30 21:49:52.937: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9740-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Mar 30 21:49:52.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9740-crds.spec.bars' Mar 30 21:49:53.172: INFO: stderr: "" Mar 30 21:49:53.172: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9740-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Mar 30 21:49:53.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9740-crds.spec.bars2' Mar 30 21:49:53.398: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:49:56.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3869" for this suite. 
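The rc: 1 entries above are kubectl's client-side validation rejecting requests against the published CRD schema, and kubectl explain walking that schema field by field. Both can be tried by hand against any CRD with a structural validation schema; the sketch below assumes a Foo CRD in group example.com whose schema declares spec.bars and disallows unknown properties, mirroring the test fixture:

# Walk the published schema, one level at a time
kubectl explain foos.spec
kubectl explain foos.spec.bars

# A property the schema does not allow is rejected client-side
# (non-zero exit, matching the rc: 1 lines in the log)
cat <<'EOF' | kubectl create -f -
apiVersion: example.com/v1
kind: Foo
metadata:
  name: test-foo
spec:
  notAKnownField: 1
EOF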
• [SLOW TEST:11.462 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":168,"skipped":2866,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:49:56.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 30 21:49:56.651: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 30 21:49:58.660: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721201796, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721201796, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721201796, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721201796, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 30 21:50:01.692: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:50:01.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6844" for this suite. STEP: Destroying namespace "webhook-6844-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.553 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":169,"skipped":2912,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} S ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:50:01.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 30 21:50:01.898: INFO: Creating ReplicaSet my-hostname-basic-c4f70590-5a06-4f41-8777-1bb8ee486ad4 Mar 30 21:50:01.924: INFO: Pod name my-hostname-basic-c4f70590-5a06-4f41-8777-1bb8ee486ad4: Found 0 pods out of 1 Mar 30 21:50:06.934: INFO: Pod name my-hostname-basic-c4f70590-5a06-4f41-8777-1bb8ee486ad4: Found 1 pods out of 1 Mar 30 21:50:06.934: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-c4f70590-5a06-4f41-8777-1bb8ee486ad4" is running Mar 30 21:50:06.940: INFO: Pod "my-hostname-basic-c4f70590-5a06-4f41-8777-1bb8ee486ad4-8hrw9" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-30 21:50:01 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-30 21:50:04 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-30 21:50:04 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-30 21:50:01 +0000 UTC Reason: Message:}]) Mar 30 21:50:06.940: INFO: Trying to dial the pod Mar 30 21:50:11.952: INFO: Controller my-hostname-basic-c4f70590-5a06-4f41-8777-1bb8ee486ad4: Got expected result from replica 1 [my-hostname-basic-c4f70590-5a06-4f41-8777-1bb8ee486ad4-8hrw9]: "my-hostname-basic-c4f70590-5a06-4f41-8777-1bb8ee486ad4-8hrw9", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:50:11.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-9795" for this suite. • [SLOW TEST:10.146 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":170,"skipped":2913,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:50:11.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:50:19.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5712" for this suite. • [SLOW TEST:7.088 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":278,"completed":171,"skipped":2929,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:50:19.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:50:49.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6055" for this suite. 
• [SLOW TEST:30.662 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":172,"skipped":2933,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:50:49.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 30 21:51:13.879: INFO: Container started at 2020-03-30 21:50:52 +0000 UTC, pod became ready at 2020-03-30 21:51:13 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:51:13.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9150" for this suite. 
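The pod in the readiness test above became ready about 21 seconds after its container started, which is the initial-delay behaviour being asserted. A hand-rolled equivalent, with a hypothetical pod name and a probe that always succeeds:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readiness-demo
spec:
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "sleep 3600"]
    readinessProbe:
      exec:
        command: ["/bin/true"]
      initialDelaySeconds: 20
      periodSeconds: 5
EOF
# Ready only flips to True after initialDelaySeconds, and restartCount stays 0.
kubectl get pod readiness-demo -o jsonpath='{.status.conditions[?(@.type=="Ready")].status} {.status.containerStatuses[0].restartCount}'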
• [SLOW TEST:24.173 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":173,"skipped":2942,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:51:13.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0330 21:51:44.486065 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 30 21:51:44.486: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:51:44.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2665" for this suite. 
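The 30-second wait in the garbage collector test checks that, with deleteOptions.PropagationPolicy set to Orphan, the ReplicaSet outlives its Deployment. A rough command-line equivalent; note that orphaning is spelled --cascade=false on the v1.17 client used in this run and --cascade=orphan on v1.20+ clients:

kubectl create deployment gc-demo --image=docker.io/library/httpd:2.4.38-alpine
# Delete only the Deployment; dependents are orphaned rather than collected.
kubectl delete deployment gc-demo --cascade=false
# The ReplicaSet survives, with its ownerReference to the Deployment removed.
kubectl get replicasets -l app=gc-demo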
• [SLOW TEST:30.606 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":174,"skipped":2947,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:51:44.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-5116 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-5116 I0330 21:51:44.641756 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-5116, replica count: 2 I0330 21:51:47.692367 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0330 21:51:50.692704 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 30 21:51:50.692: INFO: Creating new exec pod Mar 30 21:51:55.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5116 execpoddx86n -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Mar 30 21:51:55.916: INFO: stderr: "I0330 21:51:55.838912 2166 log.go:172] (0xc000018630) (0xc0005fc000) Create stream\nI0330 21:51:55.838994 2166 log.go:172] (0xc000018630) (0xc0005fc000) Stream added, broadcasting: 1\nI0330 21:51:55.841843 2166 log.go:172] (0xc000018630) Reply frame received for 1\nI0330 21:51:55.841891 2166 log.go:172] (0xc000018630) (0xc0005fc0a0) Create stream\nI0330 21:51:55.841905 2166 log.go:172] (0xc000018630) (0xc0005fc0a0) Stream added, broadcasting: 3\nI0330 21:51:55.842894 2166 log.go:172] (0xc000018630) Reply frame received for 3\nI0330 21:51:55.842934 2166 log.go:172] (0xc000018630) (0xc0005fc1e0) Create stream\nI0330 21:51:55.842947 2166 log.go:172] (0xc000018630) (0xc0005fc1e0) Stream added, broadcasting: 5\nI0330 21:51:55.843954 2166 log.go:172] (0xc000018630) Reply frame received for 5\nI0330 21:51:55.909659 2166 log.go:172] (0xc000018630) Data frame received for 5\nI0330 
21:51:55.909690 2166 log.go:172] (0xc0005fc1e0) (5) Data frame handling\nI0330 21:51:55.909712 2166 log.go:172] (0xc0005fc1e0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0330 21:51:55.910309 2166 log.go:172] (0xc000018630) Data frame received for 5\nI0330 21:51:55.910340 2166 log.go:172] (0xc0005fc1e0) (5) Data frame handling\nI0330 21:51:55.910374 2166 log.go:172] (0xc0005fc1e0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0330 21:51:55.910778 2166 log.go:172] (0xc000018630) Data frame received for 5\nI0330 21:51:55.910830 2166 log.go:172] (0xc0005fc1e0) (5) Data frame handling\nI0330 21:51:55.910870 2166 log.go:172] (0xc000018630) Data frame received for 3\nI0330 21:51:55.910894 2166 log.go:172] (0xc0005fc0a0) (3) Data frame handling\nI0330 21:51:55.912599 2166 log.go:172] (0xc000018630) Data frame received for 1\nI0330 21:51:55.912619 2166 log.go:172] (0xc0005fc000) (1) Data frame handling\nI0330 21:51:55.912634 2166 log.go:172] (0xc0005fc000) (1) Data frame sent\nI0330 21:51:55.912647 2166 log.go:172] (0xc000018630) (0xc0005fc000) Stream removed, broadcasting: 1\nI0330 21:51:55.912739 2166 log.go:172] (0xc000018630) Go away received\nI0330 21:51:55.913374 2166 log.go:172] (0xc000018630) (0xc0005fc000) Stream removed, broadcasting: 1\nI0330 21:51:55.913415 2166 log.go:172] (0xc000018630) (0xc0005fc0a0) Stream removed, broadcasting: 3\nI0330 21:51:55.913437 2166 log.go:172] (0xc000018630) (0xc0005fc1e0) Stream removed, broadcasting: 5\n" Mar 30 21:51:55.916: INFO: stdout: "" Mar 30 21:51:55.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5116 execpoddx86n -- /bin/sh -x -c nc -zv -t -w 2 10.106.70.134 80' Mar 30 21:51:56.110: INFO: stderr: "I0330 21:51:56.046538 2188 log.go:172] (0xc000504a50) (0xc0006b9a40) Create stream\nI0330 21:51:56.046590 2188 log.go:172] (0xc000504a50) (0xc0006b9a40) Stream added, broadcasting: 1\nI0330 21:51:56.048928 2188 log.go:172] (0xc000504a50) Reply frame received for 1\nI0330 21:51:56.048973 2188 log.go:172] (0xc000504a50) (0xc000a68000) Create stream\nI0330 21:51:56.048985 2188 log.go:172] (0xc000504a50) (0xc000a68000) Stream added, broadcasting: 3\nI0330 21:51:56.050063 2188 log.go:172] (0xc000504a50) Reply frame received for 3\nI0330 21:51:56.050094 2188 log.go:172] (0xc000504a50) (0xc0006b9c20) Create stream\nI0330 21:51:56.050103 2188 log.go:172] (0xc000504a50) (0xc0006b9c20) Stream added, broadcasting: 5\nI0330 21:51:56.051032 2188 log.go:172] (0xc000504a50) Reply frame received for 5\nI0330 21:51:56.104494 2188 log.go:172] (0xc000504a50) Data frame received for 3\nI0330 21:51:56.104551 2188 log.go:172] (0xc000a68000) (3) Data frame handling\nI0330 21:51:56.104592 2188 log.go:172] (0xc000504a50) Data frame received for 5\nI0330 21:51:56.104615 2188 log.go:172] (0xc0006b9c20) (5) Data frame handling\nI0330 21:51:56.104645 2188 log.go:172] (0xc0006b9c20) (5) Data frame sent\nI0330 21:51:56.104661 2188 log.go:172] (0xc000504a50) Data frame received for 5\nI0330 21:51:56.104674 2188 log.go:172] (0xc0006b9c20) (5) Data frame handling\n+ nc -zv -t -w 2 10.106.70.134 80\nConnection to 10.106.70.134 80 port [tcp/http] succeeded!\nI0330 21:51:56.106050 2188 log.go:172] (0xc000504a50) Data frame received for 1\nI0330 21:51:56.106080 2188 log.go:172] (0xc0006b9a40) (1) Data frame handling\nI0330 21:51:56.106092 2188 log.go:172] (0xc0006b9a40) (1) Data frame sent\nI0330 21:51:56.106106 2188 log.go:172] (0xc000504a50) (0xc0006b9a40) Stream removed, 
broadcasting: 1\nI0330 21:51:56.106124 2188 log.go:172] (0xc000504a50) Go away received\nI0330 21:51:56.106516 2188 log.go:172] (0xc000504a50) (0xc0006b9a40) Stream removed, broadcasting: 1\nI0330 21:51:56.106541 2188 log.go:172] (0xc000504a50) (0xc000a68000) Stream removed, broadcasting: 3\nI0330 21:51:56.106563 2188 log.go:172] (0xc000504a50) (0xc0006b9c20) Stream removed, broadcasting: 5\n" Mar 30 21:51:56.111: INFO: stdout: "" Mar 30 21:51:56.111: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:51:56.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5116" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:11.650 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":175,"skipped":2962,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:51:56.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:52:00.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1722" for this suite. 
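The kubelet test that just finished schedules a busybox container with a read-only root filesystem and verifies that nothing gets written to it. A minimal way to watch the same enforcement by hand, assuming the hypothetical pod name readonly-demo:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readonly-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "echo hello > /file"]
    securityContext:
      readOnlyRootFilesystem: true
EOF
# The write fails with "Read-only file system", visible in the pod logs.
kubectl logs readonly-demo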
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":176,"skipped":2998,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:52:00.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-1967, will wait for the garbage collector to delete the pods Mar 30 21:52:06.372: INFO: Deleting Job.batch foo took: 6.310339ms Mar 30 21:52:06.672: INFO: Terminating Job.batch foo pods took: 300.245492ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:52:49.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1967" for this suite. • [SLOW TEST:49.322 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":177,"skipped":3000,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:52:49.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-66fbe4ee-6976-4f99-bfea-8a150317a087 in namespace container-probe-9066 Mar 30 21:52:53.686: INFO: Started pod liveness-66fbe4ee-6976-4f99-bfea-8a150317a087 in namespace container-probe-9066 STEP: checking the pod's current state and verifying that restartCount is present Mar 30 21:52:53.689: INFO: Initial restart count of pod liveness-66fbe4ee-6976-4f99-bfea-8a150317a087 is 0 Mar 30 21:53:13.741: INFO: Restart count of pod 
container-probe-9066/liveness-66fbe4ee-6976-4f99-bfea-8a150317a087 is now 1 (20.051569089s elapsed) Mar 30 21:53:33.784: INFO: Restart count of pod container-probe-9066/liveness-66fbe4ee-6976-4f99-bfea-8a150317a087 is now 2 (40.09408014s elapsed) Mar 30 21:53:53.822: INFO: Restart count of pod container-probe-9066/liveness-66fbe4ee-6976-4f99-bfea-8a150317a087 is now 3 (1m0.132595502s elapsed) Mar 30 21:54:13.867: INFO: Restart count of pod container-probe-9066/liveness-66fbe4ee-6976-4f99-bfea-8a150317a087 is now 4 (1m20.177618396s elapsed) Mar 30 21:55:13.993: INFO: Restart count of pod container-probe-9066/liveness-66fbe4ee-6976-4f99-bfea-8a150317a087 is now 5 (2m20.303056787s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:55:14.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9066" for this suite. • [SLOW TEST:144.463 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":178,"skipped":3012,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:55:14.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1632 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Mar 30 21:55:14.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-8543' Mar 30 21:55:14.183: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Mar 30 21:55:14.183: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created STEP: confirm that you can get logs from an rc Mar 30 21:55:14.400: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-s7298] Mar 30 21:55:14.400: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-s7298" in namespace "kubectl-8543" to be "running and ready" Mar 30 21:55:14.419: INFO: Pod "e2e-test-httpd-rc-s7298": Phase="Pending", Reason="", readiness=false. Elapsed: 19.272785ms Mar 30 21:55:16.435: INFO: Pod "e2e-test-httpd-rc-s7298": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034457851s Mar 30 21:55:18.439: INFO: Pod "e2e-test-httpd-rc-s7298": Phase="Running", Reason="", readiness=true. Elapsed: 4.038791777s Mar 30 21:55:18.439: INFO: Pod "e2e-test-httpd-rc-s7298" satisfied condition "running and ready" Mar 30 21:55:18.439: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-s7298] Mar 30 21:55:18.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-8543' Mar 30 21:55:18.574: INFO: stderr: "" Mar 30 21:55:18.574: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.47. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.47. Set the 'ServerName' directive globally to suppress this message\n[Mon Mar 30 21:55:16.592119 2020] [mpm_event:notice] [pid 1:tid 139800054008680] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Mon Mar 30 21:55:16.592168 2020] [core:notice] [pid 1:tid 139800054008680] AH00094: Command line: 'httpd -D FOREGROUND'\n" [AfterEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1637 Mar 30 21:55:18.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-8543' Mar 30 21:55:18.697: INFO: stderr: "" Mar 30 21:55:18.697: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:55:18.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8543" for this suite. 
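The deprecation warning captured above points at the two replacements for generator-based kubectl run; on a current client they look like this (names here are illustrative, not from the run):

# run-pod/v1 behaviour: a bare pod rather than a ReplicationController.
kubectl run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine
# Or create a managed workload explicitly with kubectl create.
kubectl create deployment e2e-test-httpd --image=docker.io/library/httpd:2.4.38-alpine
# kubectl logs resolves workload references, as with rc/e2e-test-httpd-rc above.
kubectl logs deployment/e2e-test-httpd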
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":278,"completed":179,"skipped":3032,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:55:18.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:55:18.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4955" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":180,"skipped":3047,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:55:18.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 30 21:55:18.956: INFO: Pod name rollover-pod: Found 0 pods out of 1 Mar 30 21:55:23.973: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 30 21:55:23.973: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Mar 30 21:55:25.977: INFO: Creating deployment "test-rollover-deployment" Mar 30 21:55:26.000: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Mar 30 21:55:28.006: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Mar 30 21:55:28.013: INFO: Ensure that both replica sets have 1 created replica Mar 30 21:55:28.020: INFO: Rollover old replica 
sets for deployment "test-rollover-deployment" with new image update Mar 30 21:55:28.026: INFO: Updating deployment test-rollover-deployment Mar 30 21:55:28.026: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Mar 30 21:55:30.038: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Mar 30 21:55:30.044: INFO: Make sure deployment "test-rollover-deployment" is complete Mar 30 21:55:30.050: INFO: all replica sets need to contain the pod-template-hash label Mar 30 21:55:30.050: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721202126, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721202126, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721202128, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721202126, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 30 21:55:32.058: INFO: all replica sets need to contain the pod-template-hash label Mar 30 21:55:32.059: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721202126, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721202126, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721202130, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721202126, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 30 21:55:34.059: INFO: all replica sets need to contain the pod-template-hash label Mar 30 21:55:34.059: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721202126, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721202126, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721202130, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721202126, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, 
CollisionCount:(*int32)(nil)} Mar 30 21:55:36.059: INFO: all replica sets need to contain the pod-template-hash label Mar 30 21:55:36.059: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721202126, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721202126, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721202130, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721202126, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 30 21:55:38.058: INFO: all replica sets need to contain the pod-template-hash label Mar 30 21:55:38.058: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721202126, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721202126, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721202130, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721202126, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 30 21:55:40.058: INFO: all replica sets need to contain the pod-template-hash label Mar 30 21:55:40.058: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721202126, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721202126, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721202130, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721202126, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 30 21:55:42.057: INFO: Mar 30 21:55:42.058: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 30 21:55:42.066: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-4883 
/apis/apps/v1/namespaces/deployment-4883/deployments/test-rollover-deployment ef753715-a678-4438-9b98-bfcbf3b9f2f3 4064425 2 2020-03-30 21:55:25 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00544cf18 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-03-30 21:55:26 +0000 UTC,LastTransitionTime:2020-03-30 21:55:26 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-03-30 21:55:41 +0000 UTC,LastTransitionTime:2020-03-30 21:55:26 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Mar 30 21:55:42.070: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-4883 /apis/apps/v1/namespaces/deployment-4883/replicasets/test-rollover-deployment-574d6dfbff f4de9096-a900-4e4e-8b02-339b2bad0580 4064414 2 2020-03-30 21:55:28 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment ef753715-a678-4438-9b98-bfcbf3b9f2f3 0xc00544d387 0xc00544d388}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00544d3f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil 
default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 30 21:55:42.070: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Mar 30 21:55:42.070: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-4883 /apis/apps/v1/namespaces/deployment-4883/replicasets/test-rollover-controller d363ddf5-ffab-4f01-b4ac-dcb742d4d774 4064423 2 2020-03-30 21:55:18 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment ef753715-a678-4438-9b98-bfcbf3b9f2f3 0xc00544d2b7 0xc00544d2b8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00544d318 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 30 21:55:42.070: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-4883 /apis/apps/v1/namespaces/deployment-4883/replicasets/test-rollover-deployment-f6c94f66c 70cacec3-f2f6-494a-aa82-e42f8c663f79 4064364 2 2020-03-30 21:55:26 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment ef753715-a678-4438-9b98-bfcbf3b9f2f3 0xc00544d460 0xc00544d461}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00544d4f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 30 21:55:42.074: INFO: Pod "test-rollover-deployment-574d6dfbff-5wxt6" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-5wxt6 test-rollover-deployment-574d6dfbff- deployment-4883 
/api/v1/namespaces/deployment-4883/pods/test-rollover-deployment-574d6dfbff-5wxt6 914b0ac3-9ad8-4116-9020-8538780b0a9c 4064381 0 2020-03-30 21:55:28 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff f4de9096-a900-4e4e-8b02-339b2bad0580 0xc00544da97 0xc00544da98}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-s8clp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-s8clp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-s8clp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:55:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:55:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:55:30 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 21:55:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.249,StartTime:2020-03-30 21:55:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-30 21:55:30 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://a5d7155a04822d6b3c055445ba355759a03f15580b9f28852221104524f24c55,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.249,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:55:42.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4883" for this suite. • [SLOW TEST:23.268 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":181,"skipped":3057,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:55:42.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 30 21:55:42.163: INFO: Waiting up to 5m0s for pod "downwardapi-volume-53ca448d-beca-4eb1-97c4-ea33c69192d2" in namespace "projected-4519" to be "success or failure" Mar 30 21:55:42.166: INFO: Pod "downwardapi-volume-53ca448d-beca-4eb1-97c4-ea33c69192d2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.279567ms Mar 30 21:55:44.170: INFO: Pod "downwardapi-volume-53ca448d-beca-4eb1-97c4-ea33c69192d2": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007388974s Mar 30 21:55:46.175: INFO: Pod "downwardapi-volume-53ca448d-beca-4eb1-97c4-ea33c69192d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011734122s STEP: Saw pod success Mar 30 21:55:46.175: INFO: Pod "downwardapi-volume-53ca448d-beca-4eb1-97c4-ea33c69192d2" satisfied condition "success or failure" Mar 30 21:55:46.178: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-53ca448d-beca-4eb1-97c4-ea33c69192d2 container client-container: STEP: delete the pod Mar 30 21:55:46.211: INFO: Waiting for pod downwardapi-volume-53ca448d-beca-4eb1-97c4-ea33c69192d2 to disappear Mar 30 21:55:46.243: INFO: Pod downwardapi-volume-53ca448d-beca-4eb1-97c4-ea33c69192d2 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:55:46.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4519" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":182,"skipped":3068,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:55:46.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service endpoint-test2 in namespace services-4852 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4852 to expose endpoints map[] Mar 30 21:55:46.390: INFO: Get endpoints failed (2.998268ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Mar 30 21:55:47.395: INFO: successfully validated that service endpoint-test2 in namespace services-4852 exposes endpoints map[] (1.007835095s elapsed) STEP: Creating pod pod1 in namespace services-4852 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4852 to expose endpoints map[pod1:[80]] Mar 30 21:55:50.548: INFO: successfully validated that service endpoint-test2 in namespace services-4852 exposes endpoints map[pod1:[80]] (3.116686775s elapsed) STEP: Creating pod pod2 in namespace services-4852 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4852 to expose endpoints map[pod1:[80] pod2:[80]] Mar 30 21:55:53.647: INFO: successfully validated that service endpoint-test2 in namespace services-4852 exposes endpoints map[pod1:[80] pod2:[80]] (3.095626188s elapsed) STEP: Deleting pod pod1 in namespace services-4852 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4852 to expose endpoints map[pod2:[80]] Mar 30 21:55:54.706: INFO: successfully validated 
that service endpoint-test2 in namespace services-4852 exposes endpoints map[pod2:[80]] (1.054228532s elapsed) STEP: Deleting pod pod2 in namespace services-4852 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4852 to expose endpoints map[] Mar 30 21:55:55.750: INFO: successfully validated that service endpoint-test2 in namespace services-4852 exposes endpoints map[] (1.039509666s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:55:55.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4852" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:9.550 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":183,"skipped":3086,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:55:55.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Mar 30 21:55:59.904: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 21:55:59.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4969" for this suite. 
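Annotation: the pod shape this termination-message test exercises can be sketched with the k8s.io/api Go types. The container writes "OK" to its termination-message file and exits zero; with TerminationMessagePolicy FallbackToLogsOnError the kubelet surfaces that file, and only falls back to the container log when the file is empty and the container failed. This is a minimal sketch, not the test's actual source; the function name and image tag are illustrative.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// terminationPod builds a pod whose container writes "OK" to the
// termination-message file and exits 0. With FallbackToLogsOnError the
// kubelet reads that file first and falls back to the container log
// only when the file is empty and the container failed.
func terminationPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:                     "main",
				Image:                    "busybox:1.31", // illustrative tag
				Command:                  []string{"/bin/sh", "-c", "echo -n OK > /dev/termination-log"},
				TerminationMessagePath:   "/dev/termination-log",
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
}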
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":184,"skipped":3104,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 21:55:59.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-c581bc1a-b687-4f9d-89b5-96813c3371dd in namespace container-probe-92 Mar 30 21:56:04.011: INFO: Started pod busybox-c581bc1a-b687-4f9d-89b5-96813c3371dd in namespace container-probe-92 STEP: checking the pod's current state and verifying that restartCount is present Mar 30 21:56:04.014: INFO: Initial restart count of pod busybox-c581bc1a-b687-4f9d-89b5-96813c3371dd is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:00:04.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-92" for this suite. 
• [SLOW TEST:244.746 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":185,"skipped":3148,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:00:04.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:00:04.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5554" for this suite. 
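Annotation: the discovery walk logged above (GET /apis, then the group document, then the group/version resource list) can be reproduced with client-go's discovery client. A minimal sketch, assuming client-go is on the module path; the function name is illustrative and error handling is trimmed to plain returns.

package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

// listCRDGroup fetches the /apis group list, finds apiextensions.k8s.io,
// then lists resources under apiextensions.k8s.io/v1, where
// "customresourcedefinitions" is expected to appear.
func listCRDGroup() error {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		return err
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		return err
	}
	groups, err := dc.ServerGroups() // GET /apis
	if err != nil {
		return err
	}
	for _, g := range groups.Groups {
		if g.Name == "apiextensions.k8s.io" {
			fmt.Println("preferred version:", g.PreferredVersion.GroupVersion)
		}
	}
	rl, err := dc.ServerResourcesForGroupVersion("apiextensions.k8s.io/v1") // GET /apis/apiextensions.k8s.io/v1
	if err != nil {
		return err
	}
	for _, r := range rl.APIResources {
		fmt.Println(r.Name)
	}
	return nil
}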
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":186,"skipped":3164,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:00:04.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1382 STEP: creating the pod Mar 30 22:00:04.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7763' Mar 30 22:00:08.082: INFO: stderr: "" Mar 30 22:00:08.082: INFO: stdout: "pod/pause created\n" Mar 30 22:00:08.082: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Mar 30 22:00:08.082: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-7763" to be "running and ready" Mar 30 22:00:08.106: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 23.77808ms Mar 30 22:00:10.254: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.172609556s Mar 30 22:00:12.259: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.176760485s Mar 30 22:00:12.259: INFO: Pod "pause" satisfied condition "running and ready" Mar 30 22:00:12.259: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod Mar 30 22:00:12.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-7763' Mar 30 22:00:12.373: INFO: stderr: "" Mar 30 22:00:12.373: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Mar 30 22:00:12.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7763' Mar 30 22:00:12.465: INFO: stderr: "" Mar 30 22:00:12.465: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Mar 30 22:00:12.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-7763' Mar 30 22:00:12.567: INFO: stderr: "" Mar 30 22:00:12.567: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Mar 30 22:00:12.567: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7763' Mar 30 22:00:12.652: INFO: stderr: "" Mar 30 22:00:12.652: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1389 STEP: using delete to clean up resources Mar 30 22:00:12.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7763' Mar 30 22:00:12.766: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Mar 30 22:00:12.766: INFO: stdout: "pod \"pause\" force deleted\n" Mar 30 22:00:12.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-7763' Mar 30 22:00:12.924: INFO: stderr: "No resources found in kubectl-7763 namespace.\n" Mar 30 22:00:12.924: INFO: stdout: "" Mar 30 22:00:12.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-7763 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 30 22:00:13.017: INFO: stderr: "" Mar 30 22:00:13.017: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:00:13.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7763" for this suite. 
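Annotation: the add/verify/remove label cycle above can also be done programmatically. A sketch using client-go, assuming a client-go version (v0.18+) whose CRUD methods take a context; the pod name "pause" and namespace come from the log, the function name is illustrative. A strategic-merge patch with a null label value is the API-level equivalent of `kubectl label pods pause testing-label-`.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// labelPod adds the label the way the test does, then removes it again
// by patching the value to null.
func labelPod(cs kubernetes.Interface, ns string) error {
	add := []byte(`{"metadata":{"labels":{"testing-label":"testing-label-value"}}}`)
	if _, err := cs.CoreV1().Pods(ns).Patch(context.TODO(), "pause",
		types.StrategicMergePatchType, add, metav1.PatchOptions{}); err != nil {
		return err
	}
	remove := []byte(`{"metadata":{"labels":{"testing-label":null}}}`)
	_, err := cs.CoreV1().Pods(ns).Patch(context.TODO(), "pause",
		types.StrategicMergePatchType, remove, metav1.PatchOptions{})
	return err
}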
• [SLOW TEST:8.273 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1379 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":187,"skipped":3170,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:00:13.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-1526 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Mar 30 22:00:13.166: INFO: Found 0 stateful pods, waiting for 3 Mar 30 22:00:23.171: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 30 22:00:23.171: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 30 22:00:23.171: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Mar 30 22:00:23.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1526 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 30 22:00:23.442: INFO: stderr: "I0330 22:00:23.313392 2447 log.go:172] (0xc000b194a0) (0xc000b00640) Create stream\nI0330 22:00:23.313446 2447 log.go:172] (0xc000b194a0) (0xc000b00640) Stream added, broadcasting: 1\nI0330 22:00:23.318734 2447 log.go:172] (0xc000b194a0) Reply frame received for 1\nI0330 22:00:23.318767 2447 log.go:172] (0xc000b194a0) (0xc0006145a0) Create stream\nI0330 22:00:23.318777 2447 log.go:172] (0xc000b194a0) (0xc0006145a0) Stream added, broadcasting: 3\nI0330 22:00:23.319924 2447 log.go:172] (0xc000b194a0) Reply frame received for 3\nI0330 22:00:23.319973 2447 log.go:172] (0xc000b194a0) (0xc00043f360) Create stream\nI0330 22:00:23.319989 2447 log.go:172] (0xc000b194a0) (0xc00043f360) Stream added, broadcasting: 5\nI0330 22:00:23.321059 2447 log.go:172] (0xc000b194a0) Reply frame received for 5\nI0330 22:00:23.412109 2447 log.go:172] (0xc000b194a0) Data frame received for 5\nI0330 22:00:23.412139 2447 log.go:172] (0xc00043f360) (5) 
Data frame handling\nI0330 22:00:23.412158 2447 log.go:172] (0xc00043f360) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0330 22:00:23.435815 2447 log.go:172] (0xc000b194a0) Data frame received for 3\nI0330 22:00:23.435846 2447 log.go:172] (0xc0006145a0) (3) Data frame handling\nI0330 22:00:23.435865 2447 log.go:172] (0xc0006145a0) (3) Data frame sent\nI0330 22:00:23.435874 2447 log.go:172] (0xc000b194a0) Data frame received for 3\nI0330 22:00:23.435885 2447 log.go:172] (0xc0006145a0) (3) Data frame handling\nI0330 22:00:23.435974 2447 log.go:172] (0xc000b194a0) Data frame received for 5\nI0330 22:00:23.435990 2447 log.go:172] (0xc00043f360) (5) Data frame handling\nI0330 22:00:23.438041 2447 log.go:172] (0xc000b194a0) Data frame received for 1\nI0330 22:00:23.438069 2447 log.go:172] (0xc000b00640) (1) Data frame handling\nI0330 22:00:23.438097 2447 log.go:172] (0xc000b00640) (1) Data frame sent\nI0330 22:00:23.438127 2447 log.go:172] (0xc000b194a0) (0xc000b00640) Stream removed, broadcasting: 1\nI0330 22:00:23.438152 2447 log.go:172] (0xc000b194a0) Go away received\nI0330 22:00:23.438394 2447 log.go:172] (0xc000b194a0) (0xc000b00640) Stream removed, broadcasting: 1\nI0330 22:00:23.438409 2447 log.go:172] (0xc000b194a0) (0xc0006145a0) Stream removed, broadcasting: 3\nI0330 22:00:23.438418 2447 log.go:172] (0xc000b194a0) (0xc00043f360) Stream removed, broadcasting: 5\n" Mar 30 22:00:23.442: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 30 22:00:23.442: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Mar 30 22:00:33.479: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Mar 30 22:00:43.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1526 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 30 22:00:43.742: INFO: stderr: "I0330 22:00:43.650922 2469 log.go:172] (0xc000513080) (0xc000703ae0) Create stream\nI0330 22:00:43.650971 2469 log.go:172] (0xc000513080) (0xc000703ae0) Stream added, broadcasting: 1\nI0330 22:00:43.654261 2469 log.go:172] (0xc000513080) Reply frame received for 1\nI0330 22:00:43.654329 2469 log.go:172] (0xc000513080) (0xc0009f6000) Create stream\nI0330 22:00:43.654349 2469 log.go:172] (0xc000513080) (0xc0009f6000) Stream added, broadcasting: 3\nI0330 22:00:43.655375 2469 log.go:172] (0xc000513080) Reply frame received for 3\nI0330 22:00:43.655405 2469 log.go:172] (0xc000513080) (0xc000703cc0) Create stream\nI0330 22:00:43.655413 2469 log.go:172] (0xc000513080) (0xc000703cc0) Stream added, broadcasting: 5\nI0330 22:00:43.656437 2469 log.go:172] (0xc000513080) Reply frame received for 5\nI0330 22:00:43.735438 2469 log.go:172] (0xc000513080) Data frame received for 5\nI0330 22:00:43.735495 2469 log.go:172] (0xc000703cc0) (5) Data frame handling\nI0330 22:00:43.735518 2469 log.go:172] (0xc000703cc0) (5) Data frame sent\nI0330 22:00:43.735535 2469 log.go:172] (0xc000513080) Data frame received for 5\nI0330 22:00:43.735544 2469 log.go:172] (0xc000703cc0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0330 22:00:43.735581 2469 log.go:172] (0xc000513080) Data frame received for 3\nI0330 22:00:43.735619 2469 log.go:172] 
(0xc0009f6000) (3) Data frame handling\nI0330 22:00:43.735654 2469 log.go:172] (0xc0009f6000) (3) Data frame sent\nI0330 22:00:43.735678 2469 log.go:172] (0xc000513080) Data frame received for 3\nI0330 22:00:43.735697 2469 log.go:172] (0xc0009f6000) (3) Data frame handling\nI0330 22:00:43.737520 2469 log.go:172] (0xc000513080) Data frame received for 1\nI0330 22:00:43.737551 2469 log.go:172] (0xc000703ae0) (1) Data frame handling\nI0330 22:00:43.737569 2469 log.go:172] (0xc000703ae0) (1) Data frame sent\nI0330 22:00:43.737590 2469 log.go:172] (0xc000513080) (0xc000703ae0) Stream removed, broadcasting: 1\nI0330 22:00:43.737611 2469 log.go:172] (0xc000513080) Go away received\nI0330 22:00:43.737987 2469 log.go:172] (0xc000513080) (0xc000703ae0) Stream removed, broadcasting: 1\nI0330 22:00:43.738007 2469 log.go:172] (0xc000513080) (0xc0009f6000) Stream removed, broadcasting: 3\nI0330 22:00:43.738016 2469 log.go:172] (0xc000513080) (0xc000703cc0) Stream removed, broadcasting: 5\n" Mar 30 22:00:43.742: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 30 22:00:43.742: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Mar 30 22:00:53.762: INFO: Waiting for StatefulSet statefulset-1526/ss2 to complete update Mar 30 22:00:53.762: INFO: Waiting for Pod statefulset-1526/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 30 22:00:53.762: INFO: Waiting for Pod statefulset-1526/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 30 22:01:03.790: INFO: Waiting for StatefulSet statefulset-1526/ss2 to complete update Mar 30 22:01:03.790: INFO: Waiting for Pod statefulset-1526/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision Mar 30 22:01:13.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1526 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Mar 30 22:01:14.023: INFO: stderr: "I0330 22:01:13.898572 2491 log.go:172] (0xc000988630) (0xc0009e2000) Create stream\nI0330 22:01:13.898628 2491 log.go:172] (0xc000988630) (0xc0009e2000) Stream added, broadcasting: 1\nI0330 22:01:13.901056 2491 log.go:172] (0xc000988630) Reply frame received for 1\nI0330 22:01:13.901092 2491 log.go:172] (0xc000988630) (0xc000715b80) Create stream\nI0330 22:01:13.901103 2491 log.go:172] (0xc000988630) (0xc000715b80) Stream added, broadcasting: 3\nI0330 22:01:13.902161 2491 log.go:172] (0xc000988630) Reply frame received for 3\nI0330 22:01:13.902214 2491 log.go:172] (0xc000988630) (0xc0002ca000) Create stream\nI0330 22:01:13.902228 2491 log.go:172] (0xc000988630) (0xc0002ca000) Stream added, broadcasting: 5\nI0330 22:01:13.903071 2491 log.go:172] (0xc000988630) Reply frame received for 5\nI0330 22:01:13.981639 2491 log.go:172] (0xc000988630) Data frame received for 5\nI0330 22:01:13.981664 2491 log.go:172] (0xc0002ca000) (5) Data frame handling\nI0330 22:01:13.981678 2491 log.go:172] (0xc0002ca000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0330 22:01:14.016789 2491 log.go:172] (0xc000988630) Data frame received for 3\nI0330 22:01:14.016832 2491 log.go:172] (0xc000715b80) (3) Data frame handling\nI0330 22:01:14.016854 2491 log.go:172] (0xc000715b80) (3) Data frame sent\nI0330 22:01:14.016872 2491 log.go:172] (0xc000988630) Data frame received for 3\nI0330 22:01:14.016901 2491 log.go:172] 
(0xc000715b80) (3) Data frame handling\nI0330 22:01:14.017266 2491 log.go:172] (0xc000988630) Data frame received for 5\nI0330 22:01:14.017312 2491 log.go:172] (0xc0002ca000) (5) Data frame handling\nI0330 22:01:14.018528 2491 log.go:172] (0xc000988630) Data frame received for 1\nI0330 22:01:14.018547 2491 log.go:172] (0xc0009e2000) (1) Data frame handling\nI0330 22:01:14.018566 2491 log.go:172] (0xc0009e2000) (1) Data frame sent\nI0330 22:01:14.018580 2491 log.go:172] (0xc000988630) (0xc0009e2000) Stream removed, broadcasting: 1\nI0330 22:01:14.018769 2491 log.go:172] (0xc000988630) Go away received\nI0330 22:01:14.018882 2491 log.go:172] (0xc000988630) (0xc0009e2000) Stream removed, broadcasting: 1\nI0330 22:01:14.018897 2491 log.go:172] (0xc000988630) (0xc000715b80) Stream removed, broadcasting: 3\nI0330 22:01:14.018905 2491 log.go:172] (0xc000988630) (0xc0002ca000) Stream removed, broadcasting: 5\n" Mar 30 22:01:14.023: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Mar 30 22:01:14.023: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Mar 30 22:01:24.059: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Mar 30 22:01:34.101: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1526 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Mar 30 22:01:34.329: INFO: stderr: "I0330 22:01:34.234078 2512 log.go:172] (0xc0006002c0) (0xc000631ae0) Create stream\nI0330 22:01:34.234144 2512 log.go:172] (0xc0006002c0) (0xc000631ae0) Stream added, broadcasting: 1\nI0330 22:01:34.236664 2512 log.go:172] (0xc0006002c0) Reply frame received for 1\nI0330 22:01:34.236716 2512 log.go:172] (0xc0006002c0) (0xc000a50000) Create stream\nI0330 22:01:34.236731 2512 log.go:172] (0xc0006002c0) (0xc000a50000) Stream added, broadcasting: 3\nI0330 22:01:34.238062 2512 log.go:172] (0xc0006002c0) Reply frame received for 3\nI0330 22:01:34.238109 2512 log.go:172] (0xc0006002c0) (0xc000631cc0) Create stream\nI0330 22:01:34.238132 2512 log.go:172] (0xc0006002c0) (0xc000631cc0) Stream added, broadcasting: 5\nI0330 22:01:34.239178 2512 log.go:172] (0xc0006002c0) Reply frame received for 5\nI0330 22:01:34.321751 2512 log.go:172] (0xc0006002c0) Data frame received for 5\nI0330 22:01:34.321789 2512 log.go:172] (0xc000631cc0) (5) Data frame handling\nI0330 22:01:34.321805 2512 log.go:172] (0xc000631cc0) (5) Data frame sent\nI0330 22:01:34.321820 2512 log.go:172] (0xc0006002c0) Data frame received for 5\nI0330 22:01:34.321848 2512 log.go:172] (0xc000631cc0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0330 22:01:34.321926 2512 log.go:172] (0xc0006002c0) Data frame received for 3\nI0330 22:01:34.321961 2512 log.go:172] (0xc000a50000) (3) Data frame handling\nI0330 22:01:34.321977 2512 log.go:172] (0xc000a50000) (3) Data frame sent\nI0330 22:01:34.321989 2512 log.go:172] (0xc0006002c0) Data frame received for 3\nI0330 22:01:34.322002 2512 log.go:172] (0xc000a50000) (3) Data frame handling\nI0330 22:01:34.323622 2512 log.go:172] (0xc0006002c0) Data frame received for 1\nI0330 22:01:34.323655 2512 log.go:172] (0xc000631ae0) (1) Data frame handling\nI0330 22:01:34.323676 2512 log.go:172] (0xc000631ae0) (1) Data frame sent\nI0330 22:01:34.323701 2512 log.go:172] (0xc0006002c0) (0xc000631ae0) Stream removed, broadcasting: 1\nI0330 22:01:34.323724 2512 log.go:172] 
(0xc0006002c0) Go away received\nI0330 22:01:34.324143 2512 log.go:172] (0xc0006002c0) (0xc000631ae0) Stream removed, broadcasting: 1\nI0330 22:01:34.324166 2512 log.go:172] (0xc0006002c0) (0xc000a50000) Stream removed, broadcasting: 3\nI0330 22:01:34.324178 2512 log.go:172] (0xc0006002c0) (0xc000631cc0) Stream removed, broadcasting: 5\n" Mar 30 22:01:34.329: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Mar 30 22:01:34.329: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 30 22:01:54.392: INFO: Deleting all statefulset in ns statefulset-1526 Mar 30 22:01:54.394: INFO: Scaling statefulset ss2 to 0 Mar 30 22:02:04.412: INFO: Waiting for statefulset status.replicas updated to 0 Mar 30 22:02:04.415: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:02:04.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1526" for this suite. • [SLOW TEST:111.410 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":188,"skipped":3195,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:02:04.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 30 22:02:04.478: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 30 22:02:04.498: INFO: Waiting for terminating namespaces to be deleted... 
Mar 30 22:02:04.518: INFO: Logging pods the kubelet thinks are on node jerma-worker before test Mar 30 22:02:04.537: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Mar 30 22:02:04.537: INFO: Container kindnet-cni ready: true, restart count 0 Mar 30 22:02:04.537: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Mar 30 22:02:04.537: INFO: Container kube-proxy ready: true, restart count 0 Mar 30 22:02:04.537: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test Mar 30 22:02:04.556: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Mar 30 22:02:04.556: INFO: Container kindnet-cni ready: true, restart count 0 Mar 30 22:02:04.556: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) Mar 30 22:02:04.556: INFO: Container kube-bench ready: false, restart count 0 Mar 30 22:02:04.556: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Mar 30 22:02:04.556: INFO: Container kube-proxy ready: true, restart count 0 Mar 30 22:02:04.556: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) Mar 30 22:02:04.556: INFO: Container kube-hunter ready: false, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-d1425d36-c583-4c16-a391-86e0c56b64f0 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-d1425d36-c583-4c16-a391-86e0c56b64f0 off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-d1425d36-c583-4c16-a391-86e0c56b64f0 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:02:12.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1233" for this suite.
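Annotation: the relaunch step above amounts to setting pod.spec.nodeSelector to the random label (here value "42") just applied to the node; the scheduler only places the pod once a node carries a matching label, otherwise it stays Pending. A minimal sketch with the k8s.io/api types; the pod name and image are illustrative, not the test's own.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// nodeSelectorPod pins a pod to whichever node carries the given label.
func nodeSelectorPod(labelKey, labelValue string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{labelKey: labelValue},
			Containers: []corev1.Container{{
				Name:  "with-labels",
				Image: "k8s.gcr.io/pause:3.1", // illustrative image
			}},
		},
	}
}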
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:8.335 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":189,"skipped":3213,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:02:12.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Mar 30 22:02:12.908: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Mar 30 22:02:24.237: INFO: >>> kubeConfig: /root/.kube/config Mar 30 22:02:27.189: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:02:36.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9696" for this suite. 
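Annotation: the "one multiversion CRD" case above boils down to a single CustomResourceDefinition serving two versions; every served version is then published into the OpenAPI document. A sketch using the apiextensions v1 Go types, with the group, kind, and schema as placeholders rather than the test's generated ones.

package main

import (
	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// multiVersionCRD declares one CRD that serves v1 and v2 at the same
// time; v1 is the storage version. Both show up in OpenAPI while served.
func multiVersionCRD() *apiextv1.CustomResourceDefinition {
	schema := &apiextv1.CustomResourceValidation{
		OpenAPIV3Schema: &apiextv1.JSONSchemaProps{Type: "object"},
	}
	return &apiextv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"}, // hypothetical group
		Spec: apiextv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextv1.NamespaceScoped,
			Names: apiextv1.CustomResourceDefinitionNames{
				Plural: "foos", Singular: "foo", Kind: "Foo", ListKind: "FooList",
			},
			Versions: []apiextv1.CustomResourceDefinitionVersion{
				{Name: "v1", Served: true, Storage: true, Schema: schema},
				{Name: "v2", Served: true, Storage: false, Schema: schema},
			},
		},
	}
}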
• [SLOW TEST:23.835 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":190,"skipped":3213,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:02:36.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 30 22:02:36.651: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4c36be01-c1b6-42df-a3b8-908e3424bdf4" in namespace "downward-api-8987" to be "success or failure" Mar 30 22:02:36.666: INFO: Pod "downwardapi-volume-4c36be01-c1b6-42df-a3b8-908e3424bdf4": Phase="Pending", Reason="", readiness=false. Elapsed: 15.431062ms Mar 30 22:02:38.671: INFO: Pod "downwardapi-volume-4c36be01-c1b6-42df-a3b8-908e3424bdf4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01969452s Mar 30 22:02:40.675: INFO: Pod "downwardapi-volume-4c36be01-c1b6-42df-a3b8-908e3424bdf4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023842702s STEP: Saw pod success Mar 30 22:02:40.675: INFO: Pod "downwardapi-volume-4c36be01-c1b6-42df-a3b8-908e3424bdf4" satisfied condition "success or failure" Mar 30 22:02:40.678: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-4c36be01-c1b6-42df-a3b8-908e3424bdf4 container client-container: STEP: delete the pod Mar 30 22:02:40.711: INFO: Waiting for pod downwardapi-volume-4c36be01-c1b6-42df-a3b8-908e3424bdf4 to disappear Mar 30 22:02:40.721: INFO: Pod downwardapi-volume-4c36be01-c1b6-42df-a3b8-908e3424bdf4 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:02:40.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8987" for this suite. 
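Annotation: the pod under test mounts a downwardAPI volume whose file is populated from a resourceFieldRef; reading that file from the container is what drives the "success or failure" check above. A sketch of such a pod with the k8s.io/api types; the pod name, mount path, image, and request quantity are illustrative.

package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIVolumePod exposes the container's own cpu request as a file
// via a downward API volume, then prints it and exits.
func downwardAPIVolumePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox:1.31", // illustrative image
				Command: []string{"/bin/sh", "-c", "cat /etc/podinfo/cpu_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse("250m"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_request",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.cpu",
							},
						}},
					},
				},
			}},
		},
	}
}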
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":191,"skipped":3261,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:02:40.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:02:45.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4756" for this suite. • [SLOW TEST:5.008 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":192,"skipped":3265,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:02:45.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 30 22:02:45.847: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-4c87840f-3679-46ff-b748-ad1ac3ac0718" in namespace "security-context-test-1080" to be "success or failure" Mar 30 22:02:45.850: INFO: Pod "busybox-readonly-false-4c87840f-3679-46ff-b748-ad1ac3ac0718": 
Phase="Pending", Reason="", readiness=false. Elapsed: 3.286406ms Mar 30 22:02:47.854: INFO: Pod "busybox-readonly-false-4c87840f-3679-46ff-b748-ad1ac3ac0718": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007413295s Mar 30 22:02:49.860: INFO: Pod "busybox-readonly-false-4c87840f-3679-46ff-b748-ad1ac3ac0718": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013241595s Mar 30 22:02:49.860: INFO: Pod "busybox-readonly-false-4c87840f-3679-46ff-b748-ad1ac3ac0718" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:02:49.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1080" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":193,"skipped":3288,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:02:49.869: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 30 22:02:50.044: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"5870c855-0011-4035-8652-c92fe2a62c5c", Controller:(*bool)(0xc00351f60a), BlockOwnerDeletion:(*bool)(0xc00351f60b)}} Mar 30 22:02:50.073: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"3ef25617-f520-47a0-bbe6-251d03482ceb", Controller:(*bool)(0xc0035b393a), BlockOwnerDeletion:(*bool)(0xc0035b393b)}} Mar 30 22:02:50.130: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"57fcfcce-9807-4adb-ba26-8a97bf5531a3", Controller:(*bool)(0xc0034f2a52), BlockOwnerDeletion:(*bool)(0xc0034f2a53)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:02:55.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4111" for this suite. 
• [SLOW TEST:5.319 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":194,"skipped":3299,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:02:55.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 30 22:02:55.384: INFO: Waiting up to 5m0s for pod "downward-api-e0231adf-696f-4e8a-994d-399c59d65696" in namespace "downward-api-8564" to be "success or failure" Mar 30 22:02:55.393: INFO: Pod "downward-api-e0231adf-696f-4e8a-994d-399c59d65696": Phase="Pending", Reason="", readiness=false. Elapsed: 8.63982ms Mar 30 22:02:57.397: INFO: Pod "downward-api-e0231adf-696f-4e8a-994d-399c59d65696": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012894745s Mar 30 22:02:59.400: INFO: Pod "downward-api-e0231adf-696f-4e8a-994d-399c59d65696": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016294091s STEP: Saw pod success Mar 30 22:02:59.401: INFO: Pod "downward-api-e0231adf-696f-4e8a-994d-399c59d65696" satisfied condition "success or failure" Mar 30 22:02:59.403: INFO: Trying to get logs from node jerma-worker pod downward-api-e0231adf-696f-4e8a-994d-399c59d65696 container dapi-container: STEP: delete the pod Mar 30 22:02:59.439: INFO: Waiting for pod downward-api-e0231adf-696f-4e8a-994d-399c59d65696 to disappear Mar 30 22:02:59.471: INFO: Pod downward-api-e0231adf-696f-4e8a-994d-399c59d65696 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:02:59.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8564" for this suite. 
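Annotation: the env-var flavour of the downward API used here maps limits.cpu/limits.memory and requests.cpu/requests.memory into the container's environment through resourceFieldRef. A sketch with the k8s.io/api types; the pod name, image, and quantities are illustrative, not the test's own values.

package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// downwardAPIEnvPod exposes the container's own cpu/memory limits and
// requests as environment variables and prints its environment.
func downwardAPIEnvPod() *corev1.Pod {
	res := func(name string) *corev1.EnvVarSource {
		// Without a ContainerName, the ref defaults to the enclosing container.
		return &corev1.EnvVarSource{
			ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: name},
		}
	}
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-env-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox:1.31",
				Command: []string{"/bin/sh", "-c", "env"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("250m"),
						corev1.ResourceMemory: resource.MustParse("32Mi"),
					},
					Limits: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("500m"),
						corev1.ResourceMemory: resource.MustParse("64Mi"),
					},
				},
				Env: []corev1.EnvVar{
					{Name: "CPU_LIMIT", ValueFrom: res("limits.cpu")},
					{Name: "MEMORY_LIMIT", ValueFrom: res("limits.memory")},
					{Name: "CPU_REQUEST", ValueFrom: res("requests.cpu")},
					{Name: "MEMORY_REQUEST", ValueFrom: res("requests.memory")},
				},
			}},
		},
	}
}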
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":195,"skipped":3315,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:02:59.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 30 22:02:59.555: INFO: (0) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/ pods/ (200; 5.840542ms)
Mar 30 22:02:59.558: INFO: (1) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.19601ms)
Mar 30 22:02:59.561: INFO: (2) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.964871ms)
Mar 30 22:02:59.564: INFO: (3) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.686246ms)
Mar 30 22:02:59.567: INFO: (4) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.403135ms)
Mar 30 22:02:59.570: INFO: (5) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.989046ms)
Mar 30 22:02:59.591: INFO: (6) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 20.899244ms)
Mar 30 22:02:59.594: INFO: (7) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.073005ms)
Mar 30 22:02:59.598: INFO: (8) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.299952ms)
Mar 30 22:02:59.600: INFO: (9) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.780451ms)
Mar 30 22:02:59.603: INFO: (10) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.953963ms)
Mar 30 22:02:59.607: INFO: (11) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.392541ms)
Mar 30 22:02:59.610: INFO: (12) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.903339ms)
Mar 30 22:02:59.614: INFO: (13) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.985967ms)
Mar 30 22:02:59.617: INFO: (14) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.604621ms)
Mar 30 22:02:59.621: INFO: (15) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.584094ms)
Mar 30 22:02:59.624: INFO: (16) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.359589ms)
Mar 30 22:02:59.628: INFO: (17) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.376468ms)
Mar 30 22:02:59.631: INFO: (18) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.634574ms)
Mar 30 22:02:59.635: INFO: (19) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/
(200; 3.431388ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:02:59.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-1574" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":278,"completed":196,"skipped":3318,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:02:59.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 30 22:02:59.724: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 30 22:02:59.734: INFO: Waiting for terminating namespaces to be deleted... Mar 30 22:02:59.736: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Mar 30 22:02:59.740: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 30 22:02:59.740: INFO: Container kindnet-cni ready: true, restart count 0 Mar 30 22:02:59.740: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 30 22:02:59.740: INFO: Container kube-proxy ready: true, restart count 0 Mar 30 22:02:59.740: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Mar 30 22:02:59.745: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 30 22:02:59.745: INFO: Container kindnet-cni ready: true, restart count 0 Mar 30 22:02:59.745: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) Mar 30 22:02:59.745: INFO: Container kube-bench ready: false, restart count 0 Mar 30 22:02:59.745: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Mar 30 22:02:59.745: INFO: Container kube-proxy ready: true, restart count 0 Mar 30 22:02:59.745: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) Mar 30 22:02:59.745: INFO: Container kube-hunter ready: false, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: verifying the node has the label node jerma-worker STEP: verifying the node has the label node jerma-worker2 Mar 30 22:02:59.869: INFO: Pod kindnet-c5svj requesting resource cpu=100m on Node jerma-worker Mar 30 22:02:59.869: INFO: Pod kindnet-zk6sq requesting resource cpu=100m on Node jerma-worker2 Mar 30 22:02:59.869: INFO: Pod kube-proxy-44mlz requesting resource cpu=0m on Node jerma-worker Mar 30 
22:02:59.869: INFO: Pod kube-proxy-75q42 requesting resource cpu=0m on Node jerma-worker2 STEP: Starting Pods to consume most of the cluster CPU. Mar 30 22:02:59.869: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker Mar 30 22:02:59.876: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-1409fce5-52bf-4208-8ab2-60a899ba4bec.1601340e4ff64ae4], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4152/filler-pod-1409fce5-52bf-4208-8ab2-60a899ba4bec to jerma-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-1409fce5-52bf-4208-8ab2-60a899ba4bec.1601340ed1b2feb0], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-1409fce5-52bf-4208-8ab2-60a899ba4bec.1601340ef9465dbc], Reason = [Created], Message = [Created container filler-pod-1409fce5-52bf-4208-8ab2-60a899ba4bec] STEP: Considering event: Type = [Normal], Name = [filler-pod-1409fce5-52bf-4208-8ab2-60a899ba4bec.1601340f08dd2356], Reason = [Started], Message = [Started container filler-pod-1409fce5-52bf-4208-8ab2-60a899ba4bec] STEP: Considering event: Type = [Normal], Name = [filler-pod-e04d1043-bc93-4d51-8ecf-b2aa6f125c49.1601340e4fa325de], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4152/filler-pod-e04d1043-bc93-4d51-8ecf-b2aa6f125c49 to jerma-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-e04d1043-bc93-4d51-8ecf-b2aa6f125c49.1601340e9aff16e8], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-e04d1043-bc93-4d51-8ecf-b2aa6f125c49.1601340edfdfedd5], Reason = [Created], Message = [Created container filler-pod-e04d1043-bc93-4d51-8ecf-b2aa6f125c49] STEP: Considering event: Type = [Normal], Name = [filler-pod-e04d1043-bc93-4d51-8ecf-b2aa6f125c49.1601340ef3e72ea5], Reason = [Started], Message = [Started container filler-pod-e04d1043-bc93-4d51-8ecf-b2aa6f125c49] STEP: Considering event: Type = [Warning], Name = [additional-pod.1601340f3f5735d6], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node jerma-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:03:05.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4152" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:5.376 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":197,"skipped":3327,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:03:05.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-495e8a49-faf9-4e90-b96e-a6f57cd8a897 STEP: Creating a pod to test consume secrets Mar 30 22:03:05.110: INFO: Waiting up to 5m0s for pod "pod-secrets-cd50f6b2-147b-430f-bca9-8011f5692102" in namespace "secrets-8659" to be "success or failure" Mar 30 22:03:05.138: INFO: Pod "pod-secrets-cd50f6b2-147b-430f-bca9-8011f5692102": Phase="Pending", Reason="", readiness=false. Elapsed: 28.196524ms Mar 30 22:03:07.153: INFO: Pod "pod-secrets-cd50f6b2-147b-430f-bca9-8011f5692102": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043220251s Mar 30 22:03:09.157: INFO: Pod "pod-secrets-cd50f6b2-147b-430f-bca9-8011f5692102": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04706078s STEP: Saw pod success Mar 30 22:03:09.157: INFO: Pod "pod-secrets-cd50f6b2-147b-430f-bca9-8011f5692102" satisfied condition "success or failure" Mar 30 22:03:09.160: INFO: Trying to get logs from node jerma-worker pod pod-secrets-cd50f6b2-147b-430f-bca9-8011f5692102 container secret-env-test: STEP: delete the pod Mar 30 22:03:09.199: INFO: Waiting for pod pod-secrets-cd50f6b2-147b-430f-bca9-8011f5692102 to disappear Mar 30 22:03:09.237: INFO: Pod pod-secrets-cd50f6b2-147b-430f-bca9-8011f5692102 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:03:09.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8659" for this suite. 
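
For reference, the secret-to-env-var wiring exercised above can be expressed with the same API types. A minimal sketch, assuming the k8s.io/api Go module; the secret name is truncated from the generated one in this run, and the key "data-1" is a placeholder since the log does not show the secret's contents:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	// Hypothetical secret name and key; the e2e framework generates these.
    	env := corev1.EnvVar{
    		Name: "SECRET_DATA",
    		ValueFrom: &corev1.EnvVarSource{
    			SecretKeyRef: &corev1.SecretKeySelector{
    				LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test-495e8a49"},
    				Key:                  "data-1",
    			},
    		},
    	}
    	container := corev1.Container{
    		Name:    "secret-env-test",
    		Image:   "busybox",
    		Command: []string{"sh", "-c", "env"}, // the test greps this output
    		Env:     []corev1.EnvVar{env},
    	}
    	fmt.Printf("%+v\n", container)
    }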
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":198,"skipped":3327,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} S ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:03:09.245: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 30 22:03:09.329: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-1ebab64d-13e9-4206-90ae-69ce3c66c12e" in namespace "security-context-test-9525" to be "success or failure" Mar 30 22:03:09.333: INFO: Pod "busybox-privileged-false-1ebab64d-13e9-4206-90ae-69ce3c66c12e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.759575ms Mar 30 22:03:11.337: INFO: Pod "busybox-privileged-false-1ebab64d-13e9-4206-90ae-69ce3c66c12e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008198234s Mar 30 22:03:13.342: INFO: Pod "busybox-privileged-false-1ebab64d-13e9-4206-90ae-69ce3c66c12e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012554176s Mar 30 22:03:13.342: INFO: Pod "busybox-privileged-false-1ebab64d-13e9-4206-90ae-69ce3c66c12e" satisfied condition "success or failure" Mar 30 22:03:13.349: INFO: Got logs for pod "busybox-privileged-false-1ebab64d-13e9-4206-90ae-69ce3c66c12e": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:03:13.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9525" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":199,"skipped":3328,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:03:13.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-521d25a7-86d2-46c7-8b02-a4c8bf47e1fe STEP: Creating the pod STEP: Updating configmap configmap-test-upd-521d25a7-86d2-46c7-8b02-a4c8bf47e1fe STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:04:35.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9711" for this suite. • [SLOW TEST:82.596 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":200,"skipped":3335,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:04:35.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's args Mar 30 22:04:36.040: INFO: Waiting up to 5m0s for pod "var-expansion-55051954-dca3-4b44-a385-0ee222d484d8" in namespace "var-expansion-3182" to be "success or failure" Mar 30 22:04:36.057: INFO: Pod "var-expansion-55051954-dca3-4b44-a385-0ee222d484d8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 17.650952ms Mar 30 22:04:38.077: INFO: Pod "var-expansion-55051954-dca3-4b44-a385-0ee222d484d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037149056s Mar 30 22:04:40.081: INFO: Pod "var-expansion-55051954-dca3-4b44-a385-0ee222d484d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041077303s STEP: Saw pod success Mar 30 22:04:40.081: INFO: Pod "var-expansion-55051954-dca3-4b44-a385-0ee222d484d8" satisfied condition "success or failure" Mar 30 22:04:40.107: INFO: Trying to get logs from node jerma-worker pod var-expansion-55051954-dca3-4b44-a385-0ee222d484d8 container dapi-container: STEP: delete the pod Mar 30 22:04:40.149: INFO: Waiting for pod var-expansion-55051954-dca3-4b44-a385-0ee222d484d8 to disappear Mar 30 22:04:40.163: INFO: Pod var-expansion-55051954-dca3-4b44-a385-0ee222d484d8 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:04:40.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3182" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":201,"skipped":3342,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:04:40.172: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on tmpfs Mar 30 22:04:40.247: INFO: Waiting up to 5m0s for pod "pod-3178699e-3941-45e5-bf72-40c8f58ff34b" in namespace "emptydir-4278" to be "success or failure" Mar 30 22:04:40.254: INFO: Pod "pod-3178699e-3941-45e5-bf72-40c8f58ff34b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.465464ms Mar 30 22:04:42.305: INFO: Pod "pod-3178699e-3941-45e5-bf72-40c8f58ff34b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05808641s Mar 30 22:04:44.308: INFO: Pod "pod-3178699e-3941-45e5-bf72-40c8f58ff34b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.061345762s STEP: Saw pod success Mar 30 22:04:44.309: INFO: Pod "pod-3178699e-3941-45e5-bf72-40c8f58ff34b" satisfied condition "success or failure" Mar 30 22:04:44.311: INFO: Trying to get logs from node jerma-worker2 pod pod-3178699e-3941-45e5-bf72-40c8f58ff34b container test-container: STEP: delete the pod Mar 30 22:04:44.325: INFO: Waiting for pod pod-3178699e-3941-45e5-bf72-40c8f58ff34b to disappear Mar 30 22:04:44.329: INFO: Pod pod-3178699e-3941-45e5-bf72-40c8f58ff34b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:04:44.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4278" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":202,"skipped":3351,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} S ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:04:44.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 30 22:04:44.409: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Mar 30 22:04:49.413: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Mar 30 22:04:49.413: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 30 22:04:49.443: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-4950 /apis/apps/v1/namespaces/deployment-4950/deployments/test-cleanup-deployment 392b73b7-44cf-41eb-9113-5573c8a47c39 4067082 1 2020-03-30 22:04:49 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003148c68 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Mar 30 22:04:49.470: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-4950 /apis/apps/v1/namespaces/deployment-4950/replicasets/test-cleanup-deployment-55ffc6b7b6 80f408c1-126c-4272-ac8b-0f99d1ec15c5 4067084 1 2020-03-30 22:04:49 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 392b73b7-44cf-41eb-9113-5573c8a47c39 0xc0031046c7 0xc0031046c8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003104738 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 30 22:04:49.470: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Mar 30 22:04:49.470: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-4950 /apis/apps/v1/namespaces/deployment-4950/replicasets/test-cleanup-controller ba1e86df-29b0-4015-b0cb-4e896608256e 4067083 1 2020-03-30 22:04:44 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 392b73b7-44cf-41eb-9113-5573c8a47c39 0xc0031045f7 0xc0031045f8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003104658 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler 
[] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Mar 30 22:04:49.535: INFO: Pod "test-cleanup-controller-x62zv" is available: &Pod{ObjectMeta:{test-cleanup-controller-x62zv test-cleanup-controller- deployment-4950 /api/v1/namespaces/deployment-4950/pods/test-cleanup-controller-x62zv 7cdcefc5-4637-4779-8fbe-3176bf03e9b1 4067070 0 2020-03-30 22:04:44 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller ba1e86df-29b0-4015-b0cb-4e896608256e 0xc0031490b7 0xc0031490b8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sppn5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sppn5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sppn5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:04:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:04:46 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:04:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:04:44 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.14,StartTime:2020-03-30 22:04:44 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-30 22:04:46 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://5882ece07712da1ab480513af06aadeda6024693e1253ff99dc271e864986741,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.14,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 30 22:04:49.535: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-ns758" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-ns758 test-cleanup-deployment-55ffc6b7b6- deployment-4950 /api/v1/namespaces/deployment-4950/pods/test-cleanup-deployment-55ffc6b7b6-ns758 49041e78-aa6d-424b-83ae-9c4678d038ac 4067090 0 2020-03-30 22:04:49 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 80f408c1-126c-4272-ac8b-0f99d1ec15c5 0xc003149247 0xc003149248}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sppn5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sppn5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sppn5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,
SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:04:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:04:49.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4950" for this suite. • [SLOW TEST:5.267 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":203,"skipped":3352,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} S ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:04:49.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 30 22:04:49.702: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3a66a796-9848-41d0-9013-2390e7c55d83" in namespace "downward-api-9446" to be "success or failure" Mar 30 22:04:49.739: INFO: Pod "downwardapi-volume-3a66a796-9848-41d0-9013-2390e7c55d83": Phase="Pending", Reason="", readiness=false. 
Elapsed: 36.113487ms Mar 30 22:04:51.743: INFO: Pod "downwardapi-volume-3a66a796-9848-41d0-9013-2390e7c55d83": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040226403s Mar 30 22:04:53.747: INFO: Pod "downwardapi-volume-3a66a796-9848-41d0-9013-2390e7c55d83": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044302438s STEP: Saw pod success Mar 30 22:04:53.747: INFO: Pod "downwardapi-volume-3a66a796-9848-41d0-9013-2390e7c55d83" satisfied condition "success or failure" Mar 30 22:04:53.750: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-3a66a796-9848-41d0-9013-2390e7c55d83 container client-container: STEP: delete the pod Mar 30 22:04:53.764: INFO: Waiting for pod downwardapi-volume-3a66a796-9848-41d0-9013-2390e7c55d83 to disappear Mar 30 22:04:53.768: INFO: Pod downwardapi-volume-3a66a796-9848-41d0-9013-2390e7c55d83 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:04:53.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9446" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":204,"skipped":3353,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:04:53.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:04:53.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-356" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":278,"completed":205,"skipped":3357,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:04:53.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name projected-secret-test-60264e38-4071-4483-9ac8-c592c125dd24 STEP: Creating a pod to test consume secrets Mar 30 22:04:54.016: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-59cadf95-bcb6-4f3f-8bf9-d4e99c25c21f" in namespace "projected-9244" to be "success or failure" Mar 30 22:04:54.020: INFO: Pod "pod-projected-secrets-59cadf95-bcb6-4f3f-8bf9-d4e99c25c21f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.231112ms Mar 30 22:04:56.024: INFO: Pod "pod-projected-secrets-59cadf95-bcb6-4f3f-8bf9-d4e99c25c21f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008365859s Mar 30 22:04:58.028: INFO: Pod "pod-projected-secrets-59cadf95-bcb6-4f3f-8bf9-d4e99c25c21f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012351895s STEP: Saw pod success Mar 30 22:04:58.028: INFO: Pod "pod-projected-secrets-59cadf95-bcb6-4f3f-8bf9-d4e99c25c21f" satisfied condition "success or failure" Mar 30 22:04:58.031: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-59cadf95-bcb6-4f3f-8bf9-d4e99c25c21f container secret-volume-test: STEP: delete the pod Mar 30 22:04:58.065: INFO: Waiting for pod pod-projected-secrets-59cadf95-bcb6-4f3f-8bf9-d4e99c25c21f to disappear Mar 30 22:04:58.078: INFO: Pod pod-projected-secrets-59cadf95-bcb6-4f3f-8bf9-d4e99c25c21f no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:04:58.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9244" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":206,"skipped":3361,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} S ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:04:58.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4952.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4952.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4952.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4952.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4952.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4952.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4952.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4952.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4952.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4952.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4952.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4952.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4952.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 1.144.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.144.1_udp@PTR;check="$$(dig +tcp +noall +answer +search 1.144.103.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.103.144.1_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4952.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4952.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4952.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4952.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4952.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4952.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4952.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4952.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4952.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4952.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4952.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4952.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4952.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 1.144.103.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.103.144.1_udp@PTR;check="$$(dig +tcp +noall +answer +search 1.144.103.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.103.144.1_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 30 22:05:04.310: INFO: Unable to read wheezy_udp@dns-test-service.dns-4952.svc.cluster.local from pod dns-4952/dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc: the server could not find the requested resource (get pods dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc) Mar 30 22:05:04.314: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4952.svc.cluster.local from pod dns-4952/dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc: the server could not find the requested resource (get pods dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc) Mar 30 22:05:04.317: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4952.svc.cluster.local from pod dns-4952/dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc: the server could not find the requested resource (get pods dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc) Mar 30 22:05:04.320: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4952.svc.cluster.local from pod dns-4952/dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc: the server could not find the requested resource (get pods dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc) Mar 30 22:05:04.344: INFO: Unable to read jessie_udp@dns-test-service.dns-4952.svc.cluster.local from pod dns-4952/dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc: the server could not find the requested resource (get pods dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc) Mar 30 22:05:04.347: INFO: Unable to read jessie_tcp@dns-test-service.dns-4952.svc.cluster.local from pod dns-4952/dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc: the server could not find the requested resource (get pods dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc) Mar 30 22:05:04.350: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4952.svc.cluster.local from pod dns-4952/dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc: the server could not find the requested resource (get pods dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc) Mar 30 22:05:04.353: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4952.svc.cluster.local from pod dns-4952/dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc: the server could not find the requested resource (get pods dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc) Mar 30 22:05:04.369: INFO: Lookups using dns-4952/dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc failed for: [wheezy_udp@dns-test-service.dns-4952.svc.cluster.local wheezy_tcp@dns-test-service.dns-4952.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4952.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4952.svc.cluster.local jessie_udp@dns-test-service.dns-4952.svc.cluster.local jessie_tcp@dns-test-service.dns-4952.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4952.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4952.svc.cluster.local] Mar 30 22:05:09.375: INFO: Unable to read wheezy_udp@dns-test-service.dns-4952.svc.cluster.local from pod dns-4952/dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc: the server could not find the requested resource (get pods dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc) Mar 30 22:05:09.379: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4952.svc.cluster.local from pod dns-4952/dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc: the server could not find the requested resource (get pods 
dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc) Mar 30 22:05:09.382: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4952.svc.cluster.local from pod dns-4952/dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc: the server could not find the requested resource (get pods dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc) Mar 30 22:05:09.386: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4952.svc.cluster.local from pod dns-4952/dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc: the server could not find the requested resource (get pods dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc) Mar 30 22:05:09.404: INFO: Unable to read jessie_udp@dns-test-service.dns-4952.svc.cluster.local from pod dns-4952/dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc: the server could not find the requested resource (get pods dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc) Mar 30 22:05:09.407: INFO: Unable to read jessie_tcp@dns-test-service.dns-4952.svc.cluster.local from pod dns-4952/dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc: the server could not find the requested resource (get pods dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc) Mar 30 22:05:09.410: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4952.svc.cluster.local from pod dns-4952/dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc: the server could not find the requested resource (get pods dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc) Mar 30 22:05:09.412: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4952.svc.cluster.local from pod dns-4952/dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc: the server could not find the requested resource (get pods dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc) Mar 30 22:05:09.428: INFO: Lookups using dns-4952/dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc failed for: [wheezy_udp@dns-test-service.dns-4952.svc.cluster.local wheezy_tcp@dns-test-service.dns-4952.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4952.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4952.svc.cluster.local jessie_udp@dns-test-service.dns-4952.svc.cluster.local jessie_tcp@dns-test-service.dns-4952.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4952.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4952.svc.cluster.local] Mar 30 22:05:14.379: INFO: Unable to read wheezy_udp@dns-test-service.dns-4952.svc.cluster.local from pod dns-4952/dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc: the server could not find the requested resource (get pods dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc) Mar 30 22:05:14.383: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4952.svc.cluster.local from pod dns-4952/dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc: the server could not find the requested resource (get pods dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc) Mar 30 22:05:14.387: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4952.svc.cluster.local from pod dns-4952/dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc: the server could not find the requested resource (get pods dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc) Mar 30 22:05:14.390: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4952.svc.cluster.local from pod dns-4952/dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc: the server could not find the requested resource (get pods dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc) Mar 30 22:05:14.427: INFO: Unable to read jessie_udp@dns-test-service.dns-4952.svc.cluster.local from pod dns-4952/dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc: the 
server could not find the requested resource (get pods dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc) Mar 30 22:05:14.430: INFO: Unable to read jessie_tcp@dns-test-service.dns-4952.svc.cluster.local from pod dns-4952/dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc: the server could not find the requested resource (get pods dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc) Mar 30 22:05:14.433: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4952.svc.cluster.local from pod dns-4952/dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc: the server could not find the requested resource (get pods dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc) Mar 30 22:05:14.436: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4952.svc.cluster.local from pod dns-4952/dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc: the server could not find the requested resource (get pods dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc) Mar 30 22:05:14.452: INFO: Lookups using dns-4952/dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc failed for: [wheezy_udp@dns-test-service.dns-4952.svc.cluster.local wheezy_tcp@dns-test-service.dns-4952.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4952.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4952.svc.cluster.local jessie_udp@dns-test-service.dns-4952.svc.cluster.local jessie_tcp@dns-test-service.dns-4952.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4952.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4952.svc.cluster.local] Mar 30 22:05:19.374: INFO: Unable to read wheezy_udp@dns-test-service.dns-4952.svc.cluster.local from pod dns-4952/dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc: the server could not find the requested resource (get pods dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc) Mar 30 22:05:19.379: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4952.svc.cluster.local from pod dns-4952/dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc: the server could not find the requested resource (get pods dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc) Mar 30 22:05:19.382: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4952.svc.cluster.local from pod dns-4952/dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc: the server could not find the requested resource (get pods dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc) Mar 30 22:05:19.386: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4952.svc.cluster.local from pod dns-4952/dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc: the server could not find the requested resource (get pods dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc) Mar 30 22:05:19.408: INFO: Unable to read jessie_udp@dns-test-service.dns-4952.svc.cluster.local from pod dns-4952/dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc: the server could not find the requested resource (get pods dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc) Mar 30 22:05:19.411: INFO: Unable to read jessie_tcp@dns-test-service.dns-4952.svc.cluster.local from pod dns-4952/dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc: the server could not find the requested resource (get pods dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc) Mar 30 22:05:19.414: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4952.svc.cluster.local from pod dns-4952/dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc: the server could not find the requested resource (get pods dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc) Mar 30 22:05:19.417: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4952.svc.cluster.local from pod 
dns-4952/dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc: the server could not find the requested resource (get pods dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc) Mar 30 22:05:19.433: INFO: Lookups using dns-4952/dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc failed for: [wheezy_udp@dns-test-service.dns-4952.svc.cluster.local wheezy_tcp@dns-test-service.dns-4952.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4952.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4952.svc.cluster.local jessie_udp@dns-test-service.dns-4952.svc.cluster.local jessie_tcp@dns-test-service.dns-4952.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4952.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4952.svc.cluster.local] Mar 30 22:05:24.375: INFO: Unable to read wheezy_udp@dns-test-service.dns-4952.svc.cluster.local from pod dns-4952/dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc: the server could not find the requested resource (get pods dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc) Mar 30 22:05:24.378: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4952.svc.cluster.local from pod dns-4952/dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc: the server could not find the requested resource (get pods dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc) Mar 30 22:05:24.382: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4952.svc.cluster.local from pod dns-4952/dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc: the server could not find the requested resource (get pods dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc) Mar 30 22:05:24.385: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4952.svc.cluster.local from pod dns-4952/dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc: the server could not find the requested resource (get pods dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc) Mar 30 22:05:24.409: INFO: Unable to read jessie_udp@dns-test-service.dns-4952.svc.cluster.local from pod dns-4952/dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc: the server could not find the requested resource (get pods dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc) Mar 30 22:05:24.414: INFO: Unable to read jessie_tcp@dns-test-service.dns-4952.svc.cluster.local from pod dns-4952/dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc: the server could not find the requested resource (get pods dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc) Mar 30 22:05:24.417: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4952.svc.cluster.local from pod dns-4952/dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc: the server could not find the requested resource (get pods dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc) Mar 30 22:05:24.420: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4952.svc.cluster.local from pod dns-4952/dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc: the server could not find the requested resource (get pods dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc) Mar 30 22:05:24.438: INFO: Lookups using dns-4952/dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc failed for: [wheezy_udp@dns-test-service.dns-4952.svc.cluster.local wheezy_tcp@dns-test-service.dns-4952.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4952.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4952.svc.cluster.local jessie_udp@dns-test-service.dns-4952.svc.cluster.local jessie_tcp@dns-test-service.dns-4952.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4952.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4952.svc.cluster.local] Mar 30 
22:05:29.374: INFO: Unable to read wheezy_udp@dns-test-service.dns-4952.svc.cluster.local from pod dns-4952/dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc: the server could not find the requested resource (get pods dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc) Mar 30 22:05:29.378: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4952.svc.cluster.local from pod dns-4952/dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc: the server could not find the requested resource (get pods dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc) Mar 30 22:05:29.381: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4952.svc.cluster.local from pod dns-4952/dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc: the server could not find the requested resource (get pods dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc) Mar 30 22:05:29.384: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4952.svc.cluster.local from pod dns-4952/dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc: the server could not find the requested resource (get pods dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc) Mar 30 22:05:29.405: INFO: Unable to read jessie_udp@dns-test-service.dns-4952.svc.cluster.local from pod dns-4952/dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc: the server could not find the requested resource (get pods dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc) Mar 30 22:05:29.408: INFO: Unable to read jessie_tcp@dns-test-service.dns-4952.svc.cluster.local from pod dns-4952/dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc: the server could not find the requested resource (get pods dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc) Mar 30 22:05:29.411: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4952.svc.cluster.local from pod dns-4952/dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc: the server could not find the requested resource (get pods dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc) Mar 30 22:05:29.414: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4952.svc.cluster.local from pod dns-4952/dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc: the server could not find the requested resource (get pods dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc) Mar 30 22:05:29.430: INFO: Lookups using dns-4952/dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc failed for: [wheezy_udp@dns-test-service.dns-4952.svc.cluster.local wheezy_tcp@dns-test-service.dns-4952.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4952.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4952.svc.cluster.local jessie_udp@dns-test-service.dns-4952.svc.cluster.local jessie_tcp@dns-test-service.dns-4952.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4952.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4952.svc.cluster.local] Mar 30 22:05:34.425: INFO: DNS probes using dns-4952/dns-test-0048c677-bfe1-48fd-bcab-0b46518843cc succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:05:34.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4952" for this suite. 
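------------------------------
The probe loop above retries the same eight DNS records every five seconds until cluster DNS catches up, then reports success. For readers who want to reproduce those lookups by hand, a minimal Go sketch follows; the service name is copied from the log, but it only resolves from a pod inside the cluster, and the standalone-resolver framing is an assumption, not part of the suite:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Service name as queried by the test's wheezy/jessie probe pods.
	const svc = "dns-test-service.dns-4952.svc.cluster.local"

	// A record for the ClusterIP service (the plain <svc> lookups in the log).
	if addrs, err := net.LookupHost(svc); err != nil {
		fmt.Println("A lookup failed:", err)
	} else {
		fmt.Println("A records:", addrs)
	}

	// SRV record for the named port (the _http._tcp.<svc> lookups in the log).
	if _, srvs, err := net.LookupSRV("http", "tcp", svc); err != nil {
		fmt.Println("SRV lookup failed:", err)
	} else {
		for _, s := range srvs {
			fmt.Printf("SRV target: %s:%d\n", s.Target, s.Port)
		}
	}
}
------------------------------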
• [SLOW TEST:36.858 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":207,"skipped":3362,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:05:34.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 30 22:05:35.650: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 30 22:05:37.660: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721202735, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721202735, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721202735, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721202735, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 30 22:05:40.687: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Mar 30 22:05:40.709: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:05:40.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2539" for this suite. STEP: Destroying namespace "webhook-2539-markers" for this suite. 
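------------------------------
What the steps above exercise: a ValidatingWebhookConfiguration is registered for customresourcedefinitions, and the subsequent CRD create must come back rejected. The suite's real backend is the deployed sample-webhook-deployment; purely as a sketch of the shape such a backend takes (handler path, TLS file names, and rejection message are illustrative, not the suite's):

package main

import (
	"encoding/json"
	"net/http"

	admissionv1 "k8s.io/api/admission/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// denyCRD unconditionally rejects whatever AdmissionReview it receives,
// which is all a "should deny crd creation" backend needs to do.
func denyCRD(w http.ResponseWriter, r *http.Request) {
	var review admissionv1.AdmissionReview
	if err := json.NewDecoder(r.Body).Decode(&review); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	review.Response = &admissionv1.AdmissionResponse{
		UID:     review.Request.UID,
		Allowed: false,
		Result:  &metav1.Status{Message: "denied by example webhook"},
	}
	json.NewEncoder(w).Encode(&review)
}

func main() {
	http.HandleFunc("/crd", denyCRD)
	// Admission webhooks must be served over TLS; paths are placeholders.
	http.ListenAndServeTLS(":8443", "tls.crt", "tls.key", nil)
}
------------------------------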
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.877 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":208,"skipped":3362,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:05:40.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 30 22:05:41.469: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 30 22:05:43.479: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721202741, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721202741, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721202741, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721202741, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 30 22:05:46.533: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap 
that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:05:46.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3995" for this suite. STEP: Destroying namespace "webhook-3995-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.008 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":209,"skipped":3399,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:05:46.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Mar 30 22:05:46.936: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:05:53.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7767" for this suite. 
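------------------------------
The init-container test above creates a pod whose spec.initContainers must all exit zero before the app container starts; with restartPolicy Never, a failed init container fails the pod outright. A sketch of a pod of that shape, with names, images, and commands that are illustrative rather than the suite's own:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// initContainerPod mirrors the shape the test creates: init containers
// that must all succeed before the app container starts, and no restarts.
func initContainerPod(ns string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-init-", Namespace: ns},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox", Command: []string{"/bin/true"}},
				{Name: "init2", Image: "busybox", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "busybox", Command: []string{"/bin/true"}},
			},
		},
	}
}

func main() {
	b, _ := json.MarshalIndent(initContainerPod("init-container-7767"), "", "  ")
	fmt.Println(string(b))
}
------------------------------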
• [SLOW TEST:6.732 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":210,"skipped":3441,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:05:53.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 30 22:05:53.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Mar 30 22:05:53.960: INFO: stderr: "" Mar 30 22:05:53.960: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.3\", GitCommit:\"06ad960bfd03b39c8310aaf92d1e7c12ce618213\", GitTreeState:\"clean\", BuildDate:\"2020-03-18T15:31:51Z\", GoVersion:\"go1.13.8\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.2\", GitCommit:\"59603c6e503c87169aea6106f57b9f242f64df89\", GitTreeState:\"clean\", BuildDate:\"2020-02-07T01:05:17Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:05:53.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3770" for this suite. 
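------------------------------
The kubectl version check below boils down to shelling out to kubectl and asserting that both the client and server version.Info blocks appear in stdout. A rough standalone equivalent (kubeconfig path copied from the log; the exact set of substrings checked is an assumption):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Run `kubectl version` the same way the suite does.
	out, err := exec.Command("kubectl", "--kubeconfig=/root/.kube/config", "version").CombinedOutput()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	s := string(out)
	// Both halves of the output must be present for the check to pass.
	for _, want := range []string{"Client Version", "Server Version", "GitCommit"} {
		if !strings.Contains(s, want) {
			fmt.Println("missing:", want)
		}
	}
	fmt.Print(s)
}
------------------------------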
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":211,"skipped":3452,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:05:53.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Mar 30 22:05:58.588: INFO: Successfully updated pod "labelsupdate641133e9-5271-4e30-a1f8-3658b7957317" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:06:00.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7461" for this suite. • [SLOW TEST:6.655 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":212,"skipped":3465,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:06:00.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 30 22:06:01.165: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 30 22:06:03.175: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721202761, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721202761, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721202761, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721202761, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 30 22:06:06.209: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:06:06.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7423" for this suite. STEP: Destroying namespace "webhook-7423-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.246 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":213,"skipped":3465,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:06:06.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-6058 STEP: 
creating a selector STEP: Creating the service pods in kubernetes Mar 30 22:06:07.002: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Mar 30 22:06:29.136: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.17:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6058 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 30 22:06:29.136: INFO: >>> kubeConfig: /root/.kube/config I0330 22:06:29.175054 6 log.go:172] (0xc0009acb00) (0xc000c73cc0) Create stream I0330 22:06:29.175089 6 log.go:172] (0xc0009acb00) (0xc000c73cc0) Stream added, broadcasting: 1 I0330 22:06:29.178398 6 log.go:172] (0xc0009acb00) Reply frame received for 1 I0330 22:06:29.178516 6 log.go:172] (0xc0009acb00) (0xc00041a3c0) Create stream I0330 22:06:29.178619 6 log.go:172] (0xc0009acb00) (0xc00041a3c0) Stream added, broadcasting: 3 I0330 22:06:29.182365 6 log.go:172] (0xc0009acb00) Reply frame received for 3 I0330 22:06:29.182464 6 log.go:172] (0xc0009acb00) (0xc0002d4d20) Create stream I0330 22:06:29.182542 6 log.go:172] (0xc0009acb00) (0xc0002d4d20) Stream added, broadcasting: 5 I0330 22:06:29.184691 6 log.go:172] (0xc0009acb00) Reply frame received for 5 I0330 22:06:29.267758 6 log.go:172] (0xc0009acb00) Data frame received for 3 I0330 22:06:29.267812 6 log.go:172] (0xc00041a3c0) (3) Data frame handling I0330 22:06:29.267830 6 log.go:172] (0xc00041a3c0) (3) Data frame sent I0330 22:06:29.267847 6 log.go:172] (0xc0009acb00) Data frame received for 3 I0330 22:06:29.267862 6 log.go:172] (0xc00041a3c0) (3) Data frame handling I0330 22:06:29.267904 6 log.go:172] (0xc0009acb00) Data frame received for 5 I0330 22:06:29.267939 6 log.go:172] (0xc0002d4d20) (5) Data frame handling I0330 22:06:29.270003 6 log.go:172] (0xc0009acb00) Data frame received for 1 I0330 22:06:29.270022 6 log.go:172] (0xc000c73cc0) (1) Data frame handling I0330 22:06:29.270042 6 log.go:172] (0xc000c73cc0) (1) Data frame sent I0330 22:06:29.270064 6 log.go:172] (0xc0009acb00) (0xc000c73cc0) Stream removed, broadcasting: 1 I0330 22:06:29.270177 6 log.go:172] (0xc0009acb00) (0xc000c73cc0) Stream removed, broadcasting: 1 I0330 22:06:29.270208 6 log.go:172] (0xc0009acb00) (0xc00041a3c0) Stream removed, broadcasting: 3 I0330 22:06:29.270225 6 log.go:172] (0xc0009acb00) (0xc0002d4d20) Stream removed, broadcasting: 5 Mar 30 22:06:29.270: INFO: Found all expected endpoints: [netserver-0] I0330 22:06:29.270372 6 log.go:172] (0xc0009acb00) Go away received Mar 30 22:06:29.274: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.68:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6058 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 30 22:06:29.274: INFO: >>> kubeConfig: /root/.kube/config I0330 22:06:29.306709 6 log.go:172] (0xc0026ec160) (0xc0027e03c0) Create stream I0330 22:06:29.306770 6 log.go:172] (0xc0026ec160) (0xc0027e03c0) Stream added, broadcasting: 1 I0330 22:06:29.308701 6 log.go:172] (0xc0026ec160) Reply frame received for 1 I0330 22:06:29.308728 6 log.go:172] (0xc0026ec160) (0xc0024f0000) Create stream I0330 22:06:29.308739 6 log.go:172] (0xc0026ec160) (0xc0024f0000) Stream added, broadcasting: 3 I0330 22:06:29.309939 6 log.go:172] (0xc0026ec160) Reply frame received for 3 I0330 22:06:29.310002 6 log.go:172] (0xc0026ec160) 
(0xc000d3a140) Create stream I0330 22:06:29.310021 6 log.go:172] (0xc0026ec160) (0xc000d3a140) Stream added, broadcasting: 5 I0330 22:06:29.311106 6 log.go:172] (0xc0026ec160) Reply frame received for 5 I0330 22:06:29.381564 6 log.go:172] (0xc0026ec160) Data frame received for 5 I0330 22:06:29.381601 6 log.go:172] (0xc000d3a140) (5) Data frame handling I0330 22:06:29.381634 6 log.go:172] (0xc0026ec160) Data frame received for 3 I0330 22:06:29.381649 6 log.go:172] (0xc0024f0000) (3) Data frame handling I0330 22:06:29.381684 6 log.go:172] (0xc0024f0000) (3) Data frame sent I0330 22:06:29.381705 6 log.go:172] (0xc0026ec160) Data frame received for 3 I0330 22:06:29.381724 6 log.go:172] (0xc0024f0000) (3) Data frame handling I0330 22:06:29.383537 6 log.go:172] (0xc0026ec160) Data frame received for 1 I0330 22:06:29.383555 6 log.go:172] (0xc0027e03c0) (1) Data frame handling I0330 22:06:29.383562 6 log.go:172] (0xc0027e03c0) (1) Data frame sent I0330 22:06:29.383569 6 log.go:172] (0xc0026ec160) (0xc0027e03c0) Stream removed, broadcasting: 1 I0330 22:06:29.383628 6 log.go:172] (0xc0026ec160) (0xc0027e03c0) Stream removed, broadcasting: 1 I0330 22:06:29.383644 6 log.go:172] (0xc0026ec160) (0xc0024f0000) Stream removed, broadcasting: 3 I0330 22:06:29.383667 6 log.go:172] (0xc0026ec160) Go away received I0330 22:06:29.383711 6 log.go:172] (0xc0026ec160) (0xc000d3a140) Stream removed, broadcasting: 5 Mar 30 22:06:29.383: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:06:29.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6058" for this suite. • [SLOW TEST:22.522 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":214,"skipped":3530,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:06:29.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Mar 30 22:06:29.430: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Mar 30 22:06:29.530: INFO: Waiting for terminating 
namespaces to be deleted... Mar 30 22:06:29.538: INFO: Logging pods the kubelet thinks are on node jerma-worker before test Mar 30 22:06:29.550: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Mar 30 22:06:29.550: INFO: Container kindnet-cni ready: true, restart count 0 Mar 30 22:06:29.550: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Mar 30 22:06:29.551: INFO: Container kube-proxy ready: true, restart count 0 Mar 30 22:06:29.551: INFO: netserver-0 from pod-network-test-6058 started at 2020-03-30 22:06:07 +0000 UTC (1 container status recorded) Mar 30 22:06:29.551: INFO: Container webserver ready: true, restart count 0 Mar 30 22:06:29.551: INFO: host-test-container-pod from pod-network-test-6058 started at 2020-03-30 22:06:25 +0000 UTC (1 container status recorded) Mar 30 22:06:29.551: INFO: Container agnhost ready: true, restart count 0 Mar 30 22:06:29.551: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test Mar 30 22:06:29.564: INFO: netserver-1 from pod-network-test-6058 started at 2020-03-30 22:06:07 +0000 UTC (1 container status recorded) Mar 30 22:06:29.564: INFO: Container webserver ready: true, restart count 0 Mar 30 22:06:29.564: INFO: test-container-pod from pod-network-test-6058 started at 2020-03-30 22:06:25 +0000 UTC (1 container status recorded) Mar 30 22:06:29.564: INFO: Container webserver ready: true, restart count 0 Mar 30 22:06:29.564: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Mar 30 22:06:29.564: INFO: Container kube-proxy ready: true, restart count 0 Mar 30 22:06:29.564: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) Mar 30 22:06:29.564: INFO: Container kube-hunter ready: false, restart count 0 Mar 30 22:06:29.564: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) Mar 30 22:06:29.564: INFO: Container kube-bench ready: false, restart count 0 Mar 30 22:06:29.564: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) Mar 30 22:06:29.564: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-35305348-dbdc-4255-b488-23f1385a44d6 90 STEP: Trying to create a pod (pod1) with hostPort 54321 and hostIP 127.0.0.1 and expect it to be scheduled STEP: Trying to create another pod (pod2) with hostPort 54321 but hostIP 127.0.0.2 on the node where pod1 resides and expect it to be scheduled STEP: Trying to create a third pod (pod3) with hostPort 54321 and hostIP 127.0.0.2 but using the UDP protocol on the node where pod2 resides STEP: removing the label kubernetes.io/e2e-35305348-dbdc-4255-b488-23f1385a44d6 off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-35305348-dbdc-4255-b488-23f1385a44d6 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:06:45.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1474" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:16.381 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":215,"skipped":3538,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:06:45.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium Mar 30 22:06:45.851: INFO: Waiting up to 5m0s for pod "pod-54a2d8c5-5ebc-485d-bd69-e492eddba556" in namespace "emptydir-3016" to be "success or failure" Mar 30 22:06:45.860: INFO: Pod "pod-54a2d8c5-5ebc-485d-bd69-e492eddba556": Phase="Pending", Reason="", readiness=false. Elapsed: 9.695414ms Mar 30 22:06:47.895: INFO: Pod "pod-54a2d8c5-5ebc-485d-bd69-e492eddba556": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044233604s Mar 30 22:06:49.899: INFO: Pod "pod-54a2d8c5-5ebc-485d-bd69-e492eddba556": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.048387391s STEP: Saw pod success Mar 30 22:06:49.899: INFO: Pod "pod-54a2d8c5-5ebc-485d-bd69-e492eddba556" satisfied condition "success or failure" Mar 30 22:06:49.901: INFO: Trying to get logs from node jerma-worker pod pod-54a2d8c5-5ebc-485d-bd69-e492eddba556 container test-container: STEP: delete the pod Mar 30 22:06:49.954: INFO: Waiting for pod pod-54a2d8c5-5ebc-485d-bd69-e492eddba556 to disappear Mar 30 22:06:49.980: INFO: Pod pod-54a2d8c5-5ebc-485d-bd69-e492eddba556 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:06:49.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3016" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":216,"skipped":3539,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:06:49.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 30 22:06:50.672: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 30 22:06:52.681: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721202810, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721202810, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721202810, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721202810, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 30 22:06:55.711: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 30 22:06:55.714: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4159-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:06:56.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7296" for this suite. STEP: Destroying namespace "webhook-7296-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.941 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":217,"skipped":3539,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:06:56.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-4687ff7d-372b-4892-83af-628f5647b932 STEP: Creating a pod to test consume secrets Mar 30 22:06:56.986: INFO: Waiting up to 5m0s for pod "pod-secrets-471a9668-135b-4919-b756-459709b12e96" in namespace "secrets-6800" to be "success or failure" Mar 30 22:06:57.000: INFO: Pod "pod-secrets-471a9668-135b-4919-b756-459709b12e96": Phase="Pending", Reason="", readiness=false. Elapsed: 13.494983ms Mar 30 22:06:59.012: INFO: Pod "pod-secrets-471a9668-135b-4919-b756-459709b12e96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025864651s Mar 30 22:07:01.017: INFO: Pod "pod-secrets-471a9668-135b-4919-b756-459709b12e96": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.030084652s STEP: Saw pod success Mar 30 22:07:01.017: INFO: Pod "pod-secrets-471a9668-135b-4919-b756-459709b12e96" satisfied condition "success or failure" Mar 30 22:07:01.020: INFO: Trying to get logs from node jerma-worker pod pod-secrets-471a9668-135b-4919-b756-459709b12e96 container secret-volume-test: STEP: delete the pod Mar 30 22:07:01.062: INFO: Waiting for pod pod-secrets-471a9668-135b-4919-b756-459709b12e96 to disappear Mar 30 22:07:01.090: INFO: Pod pod-secrets-471a9668-135b-4919-b756-459709b12e96 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:07:01.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6800" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":218,"skipped":3542,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:07:01.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Mar 30 22:07:01.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9272' Mar 30 22:07:01.457: INFO: stderr: "" Mar 30 22:07:01.457: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 30 22:07:01.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9272' Mar 30 22:07:01.610: INFO: stderr: "" Mar 30 22:07:01.610: INFO: stdout: "update-demo-nautilus-gg2ww update-demo-nautilus-hhjgh " Mar 30 22:07:01.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gg2ww -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9272' Mar 30 22:07:01.698: INFO: stderr: "" Mar 30 22:07:01.698: INFO: stdout: "" Mar 30 22:07:01.698: INFO: update-demo-nautilus-gg2ww is created but not running Mar 30 22:07:06.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9272' Mar 30 22:07:06.807: INFO: stderr: "" Mar 30 22:07:06.807: INFO: stdout: "update-demo-nautilus-gg2ww update-demo-nautilus-hhjgh " Mar 30 22:07:06.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gg2ww -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9272' Mar 30 22:07:06.899: INFO: stderr: "" Mar 30 22:07:06.899: INFO: stdout: "true" Mar 30 22:07:06.899: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gg2ww -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9272' Mar 30 22:07:07.000: INFO: stderr: "" Mar 30 22:07:07.000: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 30 22:07:07.000: INFO: validating pod update-demo-nautilus-gg2ww Mar 30 22:07:07.004: INFO: got data: { "image": "nautilus.jpg" } Mar 30 22:07:07.004: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 30 22:07:07.005: INFO: update-demo-nautilus-gg2ww is verified up and running Mar 30 22:07:07.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hhjgh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9272' Mar 30 22:07:07.093: INFO: stderr: "" Mar 30 22:07:07.093: INFO: stdout: "true" Mar 30 22:07:07.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hhjgh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9272' Mar 30 22:07:07.194: INFO: stderr: "" Mar 30 22:07:07.194: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 30 22:07:07.194: INFO: validating pod update-demo-nautilus-hhjgh Mar 30 22:07:07.198: INFO: got data: { "image": "nautilus.jpg" } Mar 30 22:07:07.198: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 30 22:07:07.198: INFO: update-demo-nautilus-hhjgh is verified up and running STEP: using delete to clean up resources Mar 30 22:07:07.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9272' Mar 30 22:07:07.292: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 30 22:07:07.292: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 30 22:07:07.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9272' Mar 30 22:07:07.387: INFO: stderr: "No resources found in kubectl-9272 namespace.\n" Mar 30 22:07:07.387: INFO: stdout: "" Mar 30 22:07:07.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9272 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 30 22:07:07.476: INFO: stderr: "" Mar 30 22:07:07.476: INFO: stdout: "update-demo-nautilus-gg2ww\nupdate-demo-nautilus-hhjgh\n" Mar 30 22:07:07.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9272' Mar 30 22:07:08.073: INFO: stderr: "No resources found in kubectl-9272 namespace.\n" Mar 30 22:07:08.073: INFO: stdout: "" Mar 30 22:07:08.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9272 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 30 22:07:08.177: INFO: stderr: "" Mar 30 22:07:08.177: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:07:08.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9272" for this suite. • [SLOW TEST:7.087 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:328 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":219,"skipped":3548,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:07:08.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:07:08.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-8595" for this suite. 
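------------------------------
The Lease test below exercises the coordination.k8s.io/v1 CRUD surface through the client. A minimal create/get/delete round-trip with a current client-go looks roughly like this; namespace, object names, and holder identity are illustrative, and note the cluster in this log ran 1.17-era client methods that did not yet take a context argument:

package main

import (
	"context"
	"fmt"

	coordinationv1 "k8s.io/api/coordination/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	leases := cs.CoordinationV1().Leases("default")

	holder := "sketch-holder"
	lease := &coordinationv1.Lease{
		ObjectMeta: metav1.ObjectMeta{Name: "sketch-lease"},
		Spec:       coordinationv1.LeaseSpec{HolderIdentity: &holder},
	}
	ctx := context.Background()

	// Create, read back, and clean up a Lease object.
	if _, err := leases.Create(ctx, lease, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	got, err := leases.Get(ctx, "sketch-lease", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("holder:", *got.Spec.HolderIdentity)
	_ = leases.Delete(ctx, "sketch-lease", metav1.DeleteOptions{})
}
------------------------------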
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":220,"skipped":3579,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:07:08.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication controller my-hostname-basic-ac3c43f6-1754-447a-9553-a53c5f7b95b7 Mar 30 22:07:08.641: INFO: Pod name my-hostname-basic-ac3c43f6-1754-447a-9553-a53c5f7b95b7: Found 0 pods out of 1 Mar 30 22:07:13.666: INFO: Pod name my-hostname-basic-ac3c43f6-1754-447a-9553-a53c5f7b95b7: Found 1 pods out of 1 Mar 30 22:07:13.666: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-ac3c43f6-1754-447a-9553-a53c5f7b95b7" are running Mar 30 22:07:13.671: INFO: Pod "my-hostname-basic-ac3c43f6-1754-447a-9553-a53c5f7b95b7-w544q" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-30 22:07:08 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-30 22:07:11 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-30 22:07:11 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-03-30 22:07:08 +0000 UTC Reason: Message:}]) Mar 30 22:07:13.671: INFO: Trying to dial the pod Mar 30 22:07:18.684: INFO: Controller my-hostname-basic-ac3c43f6-1754-447a-9553-a53c5f7b95b7: Got expected result from replica 1 [my-hostname-basic-ac3c43f6-1754-447a-9553-a53c5f7b95b7-w544q]: "my-hostname-basic-ac3c43f6-1754-447a-9553-a53c5f7b95b7-w544q", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:07:18.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2041" for this suite. 
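------------------------------
The ReplicationController test above creates one replica of a public serve-hostname image, waits for the pod to run, then dials the replica and expects its own pod name back. The controller it submits has roughly this shape; the image tag and port here are illustrative, not lifted from the suite:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	name := "my-hostname-basic-sketch"
	// One replica, selected by a name label, serving its own hostname.
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: map[string]string{"name": name},
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": name}},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  name,
					Image: "gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1",
					Ports: []corev1.ContainerPort{{ContainerPort: 9376}},
				}}},
			},
		},
	}
	b, _ := json.MarshalIndent(rc, "", "  ")
	fmt.Println(string(b))
}
------------------------------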
• [SLOW TEST:10.108 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":221,"skipped":3592,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:07:18.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:07:31.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7376" for this suite. • [SLOW TEST:13.257 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":222,"skipped":3595,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:07:31.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:07:48.057: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3395" for this suite. • [SLOW TEST:16.116 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":223,"skipped":3612,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:07:48.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Mar 30 22:07:48.195: INFO: Waiting up to 5m0s for pod "pod-61c07fee-3e1c-4f1e-a6d9-eaf0fdde399b" in namespace "emptydir-5997" to be "success or failure" Mar 30 22:07:48.217: INFO: Pod "pod-61c07fee-3e1c-4f1e-a6d9-eaf0fdde399b": Phase="Pending", Reason="", readiness=false. Elapsed: 21.32762ms Mar 30 22:07:50.222: INFO: Pod "pod-61c07fee-3e1c-4f1e-a6d9-eaf0fdde399b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026336376s Mar 30 22:07:52.226: INFO: Pod "pod-61c07fee-3e1c-4f1e-a6d9-eaf0fdde399b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.030431222s STEP: Saw pod success Mar 30 22:07:52.226: INFO: Pod "pod-61c07fee-3e1c-4f1e-a6d9-eaf0fdde399b" satisfied condition "success or failure" Mar 30 22:07:52.228: INFO: Trying to get logs from node jerma-worker2 pod pod-61c07fee-3e1c-4f1e-a6d9-eaf0fdde399b container test-container: STEP: delete the pod Mar 30 22:07:52.268: INFO: Waiting for pod pod-61c07fee-3e1c-4f1e-a6d9-eaf0fdde399b to disappear Mar 30 22:07:52.283: INFO: Pod pod-61c07fee-3e1c-4f1e-a6d9-eaf0fdde399b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:07:52.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5997" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":224,"skipped":3642,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:07:52.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 30 22:07:52.356: INFO: Waiting up to 5m0s for pod "pod-b8396c54-1f5a-47a3-b749-cb480314f2b3" in namespace "emptydir-3246" to be "success or failure" Mar 30 22:07:52.359: INFO: Pod "pod-b8396c54-1f5a-47a3-b749-cb480314f2b3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.565416ms Mar 30 22:07:54.363: INFO: Pod "pod-b8396c54-1f5a-47a3-b749-cb480314f2b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007085084s Mar 30 22:07:56.367: INFO: Pod "pod-b8396c54-1f5a-47a3-b749-cb480314f2b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011136383s STEP: Saw pod success Mar 30 22:07:56.367: INFO: Pod "pod-b8396c54-1f5a-47a3-b749-cb480314f2b3" satisfied condition "success or failure" Mar 30 22:07:56.370: INFO: Trying to get logs from node jerma-worker2 pod pod-b8396c54-1f5a-47a3-b749-cb480314f2b3 container test-container: STEP: delete the pod Mar 30 22:07:56.411: INFO: Waiting for pod pod-b8396c54-1f5a-47a3-b749-cb480314f2b3 to disappear Mar 30 22:07:56.445: INFO: Pod pod-b8396c54-1f5a-47a3-b749-cb480314f2b3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:07:56.445: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3246" for this suite. 
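Both emptyDir tests above follow the same shape: a pod running as a non-root UID writes a file with the given mode into an emptyDir on the default medium (node disk), and the framework waits for the "success or failure" condition. A hedged sketch of such a pod (image, UID, and paths are illustrative):

package emptydirdemo

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// modeCheckPod writes a 0644 file into an emptyDir while running as a
// non-root user, prints the resulting permissions, and exits.
func modeCheckPod() *corev1.Pod {
	uid := int64(1001)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-mode-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Volumes: []corev1.Volume{{
				Name:         "cache",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}, // default medium
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "touch /cache/f && chmod 0644 /cache/f && ls -l /cache/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cache", MountPath: "/cache"}},
			}},
		},
	}
}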
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":225,"skipped":3642,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:07:56.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Mar 30 22:07:56.512: INFO: Waiting up to 5m0s for pod "downward-api-1e5ae6e6-918e-4169-839c-8e822890f426" in namespace "downward-api-8525" to be "success or failure" Mar 30 22:07:56.515: INFO: Pod "downward-api-1e5ae6e6-918e-4169-839c-8e822890f426": Phase="Pending", Reason="", readiness=false. Elapsed: 3.366074ms Mar 30 22:07:58.518: INFO: Pod "downward-api-1e5ae6e6-918e-4169-839c-8e822890f426": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006648955s Mar 30 22:08:00.522: INFO: Pod "downward-api-1e5ae6e6-918e-4169-839c-8e822890f426": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010697097s STEP: Saw pod success Mar 30 22:08:00.522: INFO: Pod "downward-api-1e5ae6e6-918e-4169-839c-8e822890f426" satisfied condition "success or failure" Mar 30 22:08:00.525: INFO: Trying to get logs from node jerma-worker2 pod downward-api-1e5ae6e6-918e-4169-839c-8e822890f426 container dapi-container: STEP: delete the pod Mar 30 22:08:00.547: INFO: Waiting for pod downward-api-1e5ae6e6-918e-4169-839c-8e822890f426 to disappear Mar 30 22:08:00.552: INFO: Pod downward-api-1e5ae6e6-918e-4169-839c-8e822890f426 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:08:00.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8525" for this suite. 
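The downward-API test above declares no limits on its container, so the environment variables fall back to the node's allocatable capacity, which is what the assertion checks. The mechanism is a resourceFieldRef; roughly (env var names are illustrative):

package downwarddemo

import corev1 "k8s.io/api/core/v1"

// defaultLimitEnv exposes limits.cpu/limits.memory to the container via the
// downward API; with no limits set on the container, the kubelet substitutes
// the node's allocatable values.
func defaultLimitEnv() []corev1.EnvVar {
	return []corev1.EnvVar{
		{
			Name: "CPU_LIMIT",
			ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
			},
		},
		{
			Name: "MEMORY_LIMIT",
			ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"},
			},
		},
	}
}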
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":226,"skipped":3662,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:08:00.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-5883 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating statefulset ss in namespace statefulset-5883 Mar 30 22:08:00.665: INFO: Found 0 stateful pods, waiting for 1 Mar 30 22:08:10.669: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 30 22:08:10.690: INFO: Deleting all statefulset in ns statefulset-5883 Mar 30 22:08:10.715: INFO: Scaling statefulset ss to 0 Mar 30 22:08:30.785: INFO: Waiting for statefulset status.replicas updated to 0 Mar 30 22:08:30.788: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:08:30.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5883" for this suite. 
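The "getting/updating a scale subresource" steps above map onto GetScale/UpdateScale in the typed apps/v1 client; a minimal sketch (statefulset name "ss" is from the log, the rest carries the same client-go assumptions as earlier sketches):

package scaledemo

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// scaleStatefulSet reads the scale subresource, bumps .spec.replicas, and
// writes it back without touching the StatefulSet object itself.
func scaleStatefulSet(ctx context.Context, cs kubernetes.Interface, ns string, replicas int32) error {
	scale, err := cs.AppsV1().StatefulSets(ns).GetScale(ctx, "ss", metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = replicas
	_, err = cs.AppsV1().StatefulSets(ns).UpdateScale(ctx, "ss", scale, metav1.UpdateOptions{})
	return err
}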
• [SLOW TEST:30.249 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":227,"skipped":3683,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:08:30.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:08:34.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5618" for this suite. 
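The hostAliases test above relies on the kubelet appending pod-spec aliases to each container's /etc/hosts; in spec form (the IP and hostnames here are illustrative):

package hostaliasdemo

import corev1 "k8s.io/api/core/v1"

// withHostAliases adds the PodSpec entries the kubelet turns into extra
// /etc/hosts lines inside every container of the pod.
func withHostAliases(spec corev1.PodSpec) corev1.PodSpec {
	spec.HostAliases = []corev1.HostAlias{{
		IP:        "123.45.67.89",
		Hostnames: []string{"foo.remote", "bar.remote"},
	}}
	return spec
}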
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":228,"skipped":3699,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:08:34.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token STEP: reading a file in the container Mar 30 22:08:39.536: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8784 pod-service-account-09c26726-03c8-4c56-a932-5731813a1be5 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Mar 30 22:08:39.774: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8784 pod-service-account-09c26726-03c8-4c56-a932-5731813a1be5 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Mar 30 22:08:39.968: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8784 pod-service-account-09c26726-03c8-4c56-a932-5731813a1be5 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:08:40.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-8784" for this suite. 
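The three kubectl exec ... cat commands above read the files the kubelet projects into any pod that automounts its service-account token; from inside a pod the same probe is just file reads (a sketch, assuming Go 1.16+ for os.ReadFile):

package satokendemo

import (
	"fmt"
	"os"
)

const saDir = "/var/run/secrets/kubernetes.io/serviceaccount"

// printServiceAccountFiles reads the token, CA bundle, and namespace files
// that back the test's three exec/cat checks.
func printServiceAccountFiles() error {
	for _, name := range []string{"token", "ca.crt", "namespace"} {
		b, err := os.ReadFile(saDir + "/" + name)
		if err != nil {
			return err
		}
		fmt.Printf("%s: %d bytes\n", name, len(b))
	}
	return nil
}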
• [SLOW TEST:5.264 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":229,"skipped":3730,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:08:40.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Mar 30 22:08:40.225: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7046 /api/v1/namespaces/watch-7046/configmaps/e2e-watch-test-configmap-a 7ada74af-8a02-4f4d-938c-2a82e183638f 4068799 0 2020-03-30 22:08:40 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 30 22:08:40.225: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7046 /api/v1/namespaces/watch-7046/configmaps/e2e-watch-test-configmap-a 7ada74af-8a02-4f4d-938c-2a82e183638f 4068799 0 2020-03-30 22:08:40 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Mar 30 22:08:50.233: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7046 /api/v1/namespaces/watch-7046/configmaps/e2e-watch-test-configmap-a 7ada74af-8a02-4f4d-938c-2a82e183638f 4068841 0 2020-03-30 22:08:40 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Mar 30 22:08:50.234: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7046 /api/v1/namespaces/watch-7046/configmaps/e2e-watch-test-configmap-a 7ada74af-8a02-4f4d-938c-2a82e183638f 4068841 0 2020-03-30 22:08:40 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Mar 30 22:09:00.260: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7046 /api/v1/namespaces/watch-7046/configmaps/e2e-watch-test-configmap-a 7ada74af-8a02-4f4d-938c-2a82e183638f 4068871 0 2020-03-30 22:08:40 +0000 UTC 
map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 30 22:09:00.260: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7046 /api/v1/namespaces/watch-7046/configmaps/e2e-watch-test-configmap-a 7ada74af-8a02-4f4d-938c-2a82e183638f 4068871 0 2020-03-30 22:08:40 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Mar 30 22:09:10.413: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7046 /api/v1/namespaces/watch-7046/configmaps/e2e-watch-test-configmap-a 7ada74af-8a02-4f4d-938c-2a82e183638f 4068901 0 2020-03-30 22:08:40 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Mar 30 22:09:10.413: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7046 /api/v1/namespaces/watch-7046/configmaps/e2e-watch-test-configmap-a 7ada74af-8a02-4f4d-938c-2a82e183638f 4068901 0 2020-03-30 22:08:40 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Mar 30 22:09:20.421: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7046 /api/v1/namespaces/watch-7046/configmaps/e2e-watch-test-configmap-b fd3cc165-95dd-403a-9dba-431e1d72c346 4068936 0 2020-03-30 22:09:20 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 30 22:09:20.421: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7046 /api/v1/namespaces/watch-7046/configmaps/e2e-watch-test-configmap-b fd3cc165-95dd-403a-9dba-431e1d72c346 4068936 0 2020-03-30 22:09:20 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Mar 30 22:09:30.427: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7046 /api/v1/namespaces/watch-7046/configmaps/e2e-watch-test-configmap-b fd3cc165-95dd-403a-9dba-431e1d72c346 4068966 0 2020-03-30 22:09:20 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Mar 30 22:09:30.427: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7046 /api/v1/namespaces/watch-7046/configmaps/e2e-watch-test-configmap-b fd3cc165-95dd-403a-9dba-431e1d72c346 4068966 0 2020-03-30 22:09:20 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:09:40.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7046" for this suite. 
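Each "creating a watch on configmaps with label ..." step above corresponds to a label-selected Watch call; a minimal client-go sketch of the "A" watcher (selector value taken from the log, the rest carries the same client-go assumptions as earlier sketches):

package watchdemo

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// watchConfigMaps prints ADDED/MODIFIED/DELETED events for configmaps that
// carry the label the test's "A" watcher selects on.
func watchConfigMaps(ctx context.Context, cs kubernetes.Interface, ns string) error {
	w, err := cs.CoreV1().ConfigMaps(ns).Watch(ctx, metav1.ListOptions{
		LabelSelector: "watch-this-configmap=multiple-watchers-A",
	})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %T\n", ev.Type, ev.Object)
	}
	return nil
}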
• [SLOW TEST:60.262 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":230,"skipped":3730,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:09:40.437: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 30 22:09:40.770: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 30 22:09:42.780: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721202980, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721202980, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721202980, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721202980, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 30 22:09:45.806: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 
22:09:46.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2557" for this suite. STEP: Destroying namespace "webhook-2557-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.902 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":231,"skipped":3738,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:09:46.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test hostPath mode Mar 30 22:09:46.431: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-1087" to be "success or failure" Mar 30 22:09:46.434: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 3.290134ms Mar 30 22:09:48.439: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007451021s Mar 30 22:09:50.506: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075252305s Mar 30 22:09:52.510: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.07881489s STEP: Saw pod success Mar 30 22:09:52.510: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Mar 30 22:09:52.513: INFO: Trying to get logs from node jerma-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod Mar 30 22:09:52.574: INFO: Waiting for pod pod-host-path-test to disappear Mar 30 22:09:52.578: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:09:52.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-1087" for this suite. 
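The hostPath test above mounts a directory from the node into pod-host-path-test and checks the mode the kubelet gives the mount point; a hedged sketch of the volume wiring (path, type, image, and command are illustrative):

package hostpathdemo

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostPathModePod mounts a node directory and lists the mount point's mode,
// mirroring the shape of the test's pod.
func hostPathModePod() *corev1.Pod {
	dirOrCreate := corev1.HostPathDirectoryOrCreate
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-host-path-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{Path: "/tmp/hostpath-demo", Type: &dirOrCreate},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "test-container-1",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "ls -ld /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
}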
• [SLOW TEST:6.245 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":232,"skipped":3745,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:09:52.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Mar 30 22:09:52.652: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:09:58.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6495" for this suite. 
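The init-container test above hinges on RestartPolicy=Never: a failing init container is not retried, the pod phase goes Failed, and the app container never starts. In spec form (images and commands are illustrative):

package initdemo

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// failingInitPod never runs its app container: with RestartPolicy Never the
// failed init container is terminal and the pod phase becomes Failed.
func failingInitPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "init-fail-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{{
				Name:    "init-fails",
				Image:   "busybox",
				Command: []string{"sh", "-c", "exit 1"}, // always fails
			}},
			Containers: []corev1.Container{{
				Name:    "app",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo never runs"},
			}},
		},
	}
}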
• [SLOW TEST:5.525 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":233,"skipped":3747,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:09:58.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0330 22:10:08.203482 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 30 22:10:08.203: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:10:08.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6174" for this suite. 
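The garbage-collector test above deletes the RC without orphaning, so its pods are collected too; the switch is the delete propagation policy (RC name here is illustrative, same client-go assumptions as earlier sketches):

package gcdemo

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteRCAndPods removes the controller and lets the garbage collector chase
// down its pods; metav1.DeletePropagationOrphan would instead leave them behind.
func deleteRCAndPods(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	policy := metav1.DeletePropagationBackground
	return cs.CoreV1().ReplicationControllers(ns).Delete(ctx, name, metav1.DeleteOptions{
		PropagationPolicy: &policy,
	})
}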
• [SLOW TEST:10.100 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":234,"skipped":3780,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:10:08.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 30 22:10:08.239: INFO: Creating deployment "webserver-deployment" Mar 30 22:10:08.259: INFO: Waiting for observed generation 1 Mar 30 22:10:10.277: INFO: Waiting for all required pods to come up Mar 30 22:10:10.282: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Mar 30 22:10:18.292: INFO: Waiting for deployment "webserver-deployment" to complete Mar 30 22:10:18.298: INFO: Updating deployment "webserver-deployment" with a non-existent image Mar 30 22:10:18.303: INFO: Updating deployment webserver-deployment Mar 30 22:10:18.303: INFO: Waiting for observed generation 2 Mar 30 22:10:20.454: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Mar 30 22:10:20.458: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Mar 30 22:10:20.461: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Mar 30 22:10:20.468: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Mar 30 22:10:20.468: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Mar 30 22:10:20.470: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Mar 30 22:10:20.474: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Mar 30 22:10:20.475: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Mar 30 22:10:20.480: INFO: Updating deployment webserver-deployment Mar 30 22:10:20.480: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Mar 30 22:10:20.611: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Mar 30 22:10:20.621: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment 
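For context on the proportional-scaling numbers just logged: with maxSurge=3 a 30-replica deployment may run up to 33 pods, and at scale-up time the two ReplicaSets held 8 and 5 of the 13 existing replicas. Splitting 33 in roughly those proportions gives 8/13 × 33 ≈ 20.3 → 20 for the old ReplicaSet, leaving 33 − 20 = 13 for the new one, which matches the .spec.replicas = 20 and 13 the test verifies (the exact rounding and leftover distribution are the deployment controller's; this is only the rough arithmetic).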
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Mar 30 22:10:20.809: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-6555 /apis/apps/v1/namespaces/deployment-6555/deployments/webserver-deployment 2d9ca7e5-7769-4dd2-96c7-8ed8c986c9e3 4069486 3 2020-03-30 22:10:08 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003091db8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-03-30 22:10:18 +0000 UTC,LastTransitionTime:2020-03-30 22:10:08 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-03-30 22:10:20 +0000 UTC,LastTransitionTime:2020-03-30 22:10:20 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Mar 30 22:10:20.945: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-6555 /apis/apps/v1/namespaces/deployment-6555/replicasets/webserver-deployment-c7997dcc8 736e218e-584a-4025-bd13-32b6be32d324 4069476 3 2020-03-30 22:10:18 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 2d9ca7e5-7769-4dd2-96c7-8ed8c986c9e3 0xc003000287 0xc003000288}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0030002f8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Mar 30 22:10:20.945: INFO: All old ReplicaSets of Deployment "webserver-deployment": Mar 30 22:10:20.945: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-6555 /apis/apps/v1/namespaces/deployment-6555/replicasets/webserver-deployment-595b5b9587 a576d6d8-0919-48d1-b9b1-f4b4600d049e 4069515 3 2020-03-30 22:10:08 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 2d9ca7e5-7769-4dd2-96c7-8ed8c986c9e3 0xc0030001c7 0xc0030001c8}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003000228 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Mar 30 22:10:21.111: INFO: Pod "webserver-deployment-595b5b9587-27rff" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-27rff webserver-deployment-595b5b9587- deployment-6555 /api/v1/namespaces/deployment-6555/pods/webserver-deployment-595b5b9587-27rff d8fa9cd0-d1aa-4e22-a053-f5d5d7704f94 4069517 0 2020-03-30 22:10:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a576d6d8-0919-48d1-b9b1-f4b4600d049e 0xc00364a5e0 0xc00364a5e1}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ktgl9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ktgl9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ktgl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 30 22:10:21.111: INFO: Pod "webserver-deployment-595b5b9587-5qb74" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-5qb74 webserver-deployment-595b5b9587- deployment-6555 /api/v1/namespaces/deployment-6555/pods/webserver-deployment-595b5b9587-5qb74 f9be0e26-85a6-44e1-8d73-180ca6c8caaf 4069500 0 2020-03-30 22:10:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 a576d6d8-0919-48d1-b9b1-f4b4600d049e 0xc00364a6f7 0xc00364a6f8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ktgl9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ktgl9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ktgl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 30 22:10:21.112: INFO: Pod "webserver-deployment-595b5b9587-6x46f" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-6x46f webserver-deployment-595b5b9587- deployment-6555 /api/v1/namespaces/deployment-6555/pods/webserver-deployment-595b5b9587-6x46f fad9e976-5490-4668-85d1-488d4ab004fd 4069526 0 
2020-03-30 22:10:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a576d6d8-0919-48d1-b9b1-f4b4600d049e 0xc00364a817 0xc00364a818}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ktgl9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ktgl9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ktgl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-03-30 22:10:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 30 22:10:21.112: INFO: Pod "webserver-deployment-595b5b9587-8wcjd" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-8wcjd webserver-deployment-595b5b9587- deployment-6555 /api/v1/namespaces/deployment-6555/pods/webserver-deployment-595b5b9587-8wcjd 36a7f092-397d-4bcc-88c1-7cd28ae07aa1 4069395 0 2020-03-30 22:10:08 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a576d6d8-0919-48d1-b9b1-f4b4600d049e 0xc00364a977 0xc00364a978}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ktgl9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ktgl9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ktgl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kube
rnetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.85,StartTime:2020-03-30 22:10:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-30 22:10:16 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://cb48c75c62f9437784f483271e6a5cef493aa2d4e8b8773d25af690876eeb8e5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.85,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 30 22:10:21.112: INFO: Pod "webserver-deployment-595b5b9587-99kfq" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-99kfq webserver-deployment-595b5b9587- deployment-6555 /api/v1/namespaces/deployment-6555/pods/webserver-deployment-595b5b9587-99kfq 43977b12-4bee-4cf5-8e31-2af60be50ae7 4069354 0 2020-03-30 22:10:08 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a576d6d8-0919-48d1-b9b1-f4b4600d049e 0xc00364aaf7 0xc00364aaf8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ktgl9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ktgl9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ktgl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.82,StartTime:2020-03-30 22:10:08 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-30 22:10:13 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0db01b3405bd232fd88e680dd1f3233704d1b026dc5805689b3b27e0b142e19e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.82,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 30 22:10:21.112: INFO: Pod "webserver-deployment-595b5b9587-bdltd" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-bdltd webserver-deployment-595b5b9587- deployment-6555 /api/v1/namespaces/deployment-6555/pods/webserver-deployment-595b5b9587-bdltd a049be13-f474-4cd7-9da1-9f593e892e80 4069519 0 2020-03-30 22:10:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a576d6d8-0919-48d1-b9b1-f4b4600d049e 0xc00364ac77 0xc00364ac78}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ktgl9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ktgl9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ktgl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Valu
e:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 30 22:10:21.113: INFO: Pod "webserver-deployment-595b5b9587-c24qb" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-c24qb webserver-deployment-595b5b9587- deployment-6555 /api/v1/namespaces/deployment-6555/pods/webserver-deployment-595b5b9587-c24qb 761efb3c-69e1-44e4-9ea8-07b452fa331e 4069394 0 2020-03-30 22:10:08 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a576d6d8-0919-48d1-b9b1-f4b4600d049e 0xc00364ad97 0xc00364ad98}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ktgl9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ktgl9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ktgl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,To
lerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.29,StartTime:2020-03-30 22:10:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-30 22:10:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://834d44fbdad4656bf3f3b4c62065be7cbbb211bdc6936105a48f0c58f6cf2784,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.29,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 30 22:10:21.113: INFO: Pod "webserver-deployment-595b5b9587-cfzf6" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-cfzf6 webserver-deployment-595b5b9587- deployment-6555 /api/v1/namespaces/deployment-6555/pods/webserver-deployment-595b5b9587-cfzf6 04fa787e-572c-4033-95f2-af75f5b222e1 4069494 0 2020-03-30 22:10:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a576d6d8-0919-48d1-b9b1-f4b4600d049e 0xc00364af17 0xc00364af18}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ktgl9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ktgl9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ktgl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 30 22:10:21.113: INFO: Pod "webserver-deployment-595b5b9587-cr5n4" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-cr5n4 webserver-deployment-595b5b9587- deployment-6555 /api/v1/namespaces/deployment-6555/pods/webserver-deployment-595b5b9587-cr5n4 a66917f4-4d46-4b8c-919c-a59d5c331b97 4069524 0 2020-03-30 22:10:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 a576d6d8-0919-48d1-b9b1-f4b4600d049e 0xc00364b037 0xc00364b038}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ktgl9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ktgl9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ktgl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 30 22:10:21.113: INFO: Pod "webserver-deployment-595b5b9587-dthw8" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-dthw8 webserver-deployment-595b5b9587- deployment-6555 /api/v1/namespaces/deployment-6555/pods/webserver-deployment-595b5b9587-dthw8 29e9f432-240e-4945-a2e6-9e4ec6103b02 4069387 0 2020-03-30 
22:10:08 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a576d6d8-0919-48d1-b9b1-f4b4600d049e 0xc00364b157 0xc00364b158}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ktgl9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ktgl9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ktgl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:08 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.83,StartTime:2020-03-30 22:10:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-30 22:10:16 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://6a38ccc908437e69a230275deee078788e17057ae4177421583711c73d9d3e9b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.83,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 30 22:10:21.113: INFO: Pod "webserver-deployment-595b5b9587-fc6dh" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-fc6dh webserver-deployment-595b5b9587- deployment-6555 /api/v1/namespaces/deployment-6555/pods/webserver-deployment-595b5b9587-fc6dh e6f49b47-fa0e-479c-8e93-fb0a75a1fa85 4069357 0 2020-03-30 22:10:08 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a576d6d8-0919-48d1-b9b1-f4b4600d049e 0xc00364b2d7 0xc00364b2d8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ktgl9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ktgl9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ktgl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:
,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.28,StartTime:2020-03-30 22:10:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-30 22:10:13 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://42e6a34b7980806df82233025a8dca126f61abb848a2d123b8e55b7df908e5da,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.28,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 30 22:10:21.114: INFO: Pod "webserver-deployment-595b5b9587-jrcsj" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-jrcsj webserver-deployment-595b5b9587- deployment-6555 /api/v1/namespaces/deployment-6555/pods/webserver-deployment-595b5b9587-jrcsj d1bd43ec-edf5-498a-9b16-83b6ddf43e39 4069335 0 2020-03-30 22:10:08 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a576d6d8-0919-48d1-b9b1-f4b4600d049e 0xc00364b457 0xc00364b458}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ktgl9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ktgl9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ktgl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.27,StartTime:2020-03-30 22:10:08 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-30 22:10:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://dbe25f13af217a4596d5b28c518b528b4939e0ee96b6fd8c4e96d7dacde7572d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.27,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 30 22:10:21.114: INFO: Pod "webserver-deployment-595b5b9587-k6cpg" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-k6cpg webserver-deployment-595b5b9587- deployment-6555 /api/v1/namespaces/deployment-6555/pods/webserver-deployment-595b5b9587-k6cpg e4613df1-da01-4497-917c-48b48655a9b2 4069499 0 2020-03-30 22:10:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a576d6d8-0919-48d1-b9b1-f4b4600d049e 0xc00364b5d7 0xc00364b5d8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ktgl9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ktgl9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ktgl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Valu
e:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 30 22:10:21.114: INFO: Pod "webserver-deployment-595b5b9587-km8t4" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-km8t4 webserver-deployment-595b5b9587- deployment-6555 /api/v1/namespaces/deployment-6555/pods/webserver-deployment-595b5b9587-km8t4 bb2976ba-0941-48f5-a4fe-b5235f64286b 4069521 0 2020-03-30 22:10:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a576d6d8-0919-48d1-b9b1-f4b4600d049e 0xc00364b6f7 0xc00364b6f8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ktgl9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ktgl9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ktgl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecut
e,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 30 22:10:21.114: INFO: Pod "webserver-deployment-595b5b9587-m9vw8" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-m9vw8 webserver-deployment-595b5b9587- deployment-6555 /api/v1/namespaces/deployment-6555/pods/webserver-deployment-595b5b9587-m9vw8 a5b0fb7f-ae91-4500-bc44-22468434840e 4069496 0 2020-03-30 22:10:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a576d6d8-0919-48d1-b9b1-f4b4600d049e 0xc00364b817 0xc00364b818}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ktgl9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ktgl9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ktgl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[
]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 30 22:10:21.114: INFO: Pod "webserver-deployment-595b5b9587-mnx6q" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-mnx6q webserver-deployment-595b5b9587- deployment-6555 /api/v1/namespaces/deployment-6555/pods/webserver-deployment-595b5b9587-mnx6q 36c6b3f0-2d15-4d46-bd2d-9a890237c036 4069399 0 2020-03-30 22:10:08 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a576d6d8-0919-48d1-b9b1-f4b4600d049e 0xc00364b937 0xc00364b938}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ktgl9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ktgl9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ktgl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:defa
ult-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.30,StartTime:2020-03-30 22:10:08 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-30 22:10:16 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://8ff526e033db1ac12475e31de9de9054961294e70396e4707264807f053aa4eb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.30,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 30 22:10:21.115: INFO: Pod "webserver-deployment-595b5b9587-qjpfv" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-qjpfv webserver-deployment-595b5b9587- deployment-6555 /api/v1/namespaces/deployment-6555/pods/webserver-deployment-595b5b9587-qjpfv 8a5525c5-498e-4afb-ad9c-f52ffa3c8e1c 4069388 0 2020-03-30 22:10:08 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a576d6d8-0919-48d1-b9b1-f4b4600d049e 0xc00364bab7 0xc00364bab8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ktgl9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ktgl9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ktgl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:08 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.31,StartTime:2020-03-30 22:10:08 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-03-30 22:10:16 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://581795df8d68ac5f93ed282d21b5df4ebefd836f11590459ab52aeb3b2e6d7b9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.31,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 30 22:10:21.115: INFO: Pod "webserver-deployment-595b5b9587-t65dk" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-t65dk webserver-deployment-595b5b9587- deployment-6555 /api/v1/namespaces/deployment-6555/pods/webserver-deployment-595b5b9587-t65dk da461564-1a57-4a47-819f-d45840d80ff5 4069487 0 2020-03-30 22:10:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a576d6d8-0919-48d1-b9b1-f4b4600d049e 0xc00364bc37 0xc00364bc38}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ktgl9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ktgl9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ktgl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Valu
e:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 30 22:10:21.115: INFO: Pod "webserver-deployment-595b5b9587-vjt5h" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-vjt5h webserver-deployment-595b5b9587- deployment-6555 /api/v1/namespaces/deployment-6555/pods/webserver-deployment-595b5b9587-vjt5h 3e41b26a-edb1-4b8f-b0d8-5de04728cc48 4069507 0 2020-03-30 22:10:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a576d6d8-0919-48d1-b9b1-f4b4600d049e 0xc00364bd57 0xc00364bd58}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ktgl9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ktgl9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ktgl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecu
te,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 30 22:10:21.115: INFO: Pod "webserver-deployment-595b5b9587-z64g4" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-z64g4 webserver-deployment-595b5b9587- deployment-6555 /api/v1/namespaces/deployment-6555/pods/webserver-deployment-595b5b9587-z64g4 b302db7a-d360-49b8-a9e6-52bbb67e3cb7 4069523 0 2020-03-30 22:10:20 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 a576d6d8-0919-48d1-b9b1-f4b4600d049e 0xc00364be77 0xc00364be78}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ktgl9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ktgl9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ktgl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[
]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 30 22:10:21.115: INFO: Pod "webserver-deployment-c7997dcc8-4n2v7" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-4n2v7 webserver-deployment-c7997dcc8- deployment-6555 /api/v1/namespaces/deployment-6555/pods/webserver-deployment-c7997dcc8-4n2v7 85ddeb67-aace-4eca-ae88-09e01e87f0e1 4069495 0 2020-03-30 22:10:20 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 736e218e-584a-4025-bd13-32b6be32d324 0xc00364bfa7 0xc00364bfa8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ktgl9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ktgl9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ktgl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContaine
rs:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 30 22:10:21.115: INFO: Pod "webserver-deployment-c7997dcc8-4xsvx" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-4xsvx webserver-deployment-c7997dcc8- deployment-6555 /api/v1/namespaces/deployment-6555/pods/webserver-deployment-c7997dcc8-4xsvx 479b5e5c-9118-44f8-945f-37537a3a5cd2 4069529 0 2020-03-30 22:10:20 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 736e218e-584a-4025-bd13-32b6be32d324 0xc0032480d7 0xc0032480d8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ktgl9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ktgl9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ktgl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 30 22:10:21.115: INFO: Pod "webserver-deployment-c7997dcc8-b248n" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-b248n webserver-deployment-c7997dcc8- deployment-6555 /api/v1/namespaces/deployment-6555/pods/webserver-deployment-c7997dcc8-b248n d86b98bc-d8ef-4088-9ed1-c727e3d33109 4069513 0 2020-03-30 22:10:20 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 736e218e-584a-4025-bd13-32b6be32d324 0xc003248207 0xc003248208}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ktgl9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ktgl9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ktgl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions
:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 30 22:10:21.115: INFO: Pod "webserver-deployment-c7997dcc8-fwfjt" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-fwfjt webserver-deployment-c7997dcc8- deployment-6555 /api/v1/namespaces/deployment-6555/pods/webserver-deployment-c7997dcc8-fwfjt 7989a749-afb1-4454-b8e2-70b3df2faba2 4069460 0 2020-03-30 22:10:18 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 736e218e-584a-4025-bd13-32b6be32d324 0xc003248337 0xc003248338}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ktgl9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ktgl9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ktgl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups
:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-03-30 22:10:18 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 30 22:10:21.116: INFO: Pod "webserver-deployment-c7997dcc8-gfk9z" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-gfk9z webserver-deployment-c7997dcc8- deployment-6555 /api/v1/namespaces/deployment-6555/pods/webserver-deployment-c7997dcc8-gfk9z bd955ee9-24ed-4145-b620-1fda9f83d489 4069464 0 2020-03-30 22:10:18 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 736e218e-584a-4025-bd13-32b6be32d324 0xc0032484e7 0xc0032484e8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ktgl9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ktgl9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ktgl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-03-30 22:10:18 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 30 22:10:21.116: INFO: Pod "webserver-deployment-c7997dcc8-hx65l" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-hx65l webserver-deployment-c7997dcc8- deployment-6555 /api/v1/namespaces/deployment-6555/pods/webserver-deployment-c7997dcc8-hx65l 2dcbcba2-cf79-49be-9a0c-cddcad187a90 4069508 0 2020-03-30 22:10:20 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 736e218e-584a-4025-bd13-32b6be32d324 0xc003248667 0xc003248668}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ktgl9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ktgl9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ktgl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhe
ad:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 30 22:10:21.116: INFO: Pod "webserver-deployment-c7997dcc8-kzmjd" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-kzmjd webserver-deployment-c7997dcc8- deployment-6555 /api/v1/namespaces/deployment-6555/pods/webserver-deployment-c7997dcc8-kzmjd 3e0a4e38-249e-4e9e-9edf-5773b93da401 4069518 0 2020-03-30 22:10:20 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 736e218e-584a-4025-bd13-32b6be32d324 0xc003248797 0xc003248798}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ktgl9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ktgl9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ktgl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeCl
assName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 30 22:10:21.116: INFO: Pod "webserver-deployment-c7997dcc8-n8jvx" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-n8jvx webserver-deployment-c7997dcc8- deployment-6555 /api/v1/namespaces/deployment-6555/pods/webserver-deployment-c7997dcc8-n8jvx 6c570906-41e7-4008-bbd4-f22a163a0404 4069437 0 2020-03-30 22:10:18 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 736e218e-584a-4025-bd13-32b6be32d324 0xc0032488c7 0xc0032488c8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ktgl9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ktgl9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ktgl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,SharePr
ocessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-03-30 22:10:18 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 30 22:10:21.116: INFO: Pod "webserver-deployment-c7997dcc8-p5wsm" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-p5wsm webserver-deployment-c7997dcc8- deployment-6555 /api/v1/namespaces/deployment-6555/pods/webserver-deployment-c7997dcc8-p5wsm 82411394-3bed-4246-8376-ac2e9dfe51d9 4069493 0 2020-03-30 22:10:20 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 736e218e-584a-4025-bd13-32b6be32d324 0xc003248a47 0xc003248a48}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ktgl9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ktgl9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ktgl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 30 22:10:21.116: INFO: Pod "webserver-deployment-c7997dcc8-pdbn9" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-pdbn9 webserver-deployment-c7997dcc8- deployment-6555 /api/v1/namespaces/deployment-6555/pods/webserver-deployment-c7997dcc8-pdbn9 5bcd63bb-fc5b-4626-add2-c222943e7f1c 4069441 0 2020-03-30 22:10:18 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 
736e218e-584a-4025-bd13-32b6be32d324 0xc003248b77 0xc003248b78}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ktgl9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ktgl9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ktgl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:18 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-03-30 22:10:18 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 30 22:10:21.117: INFO: Pod "webserver-deployment-c7997dcc8-qsx66" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-qsx66 webserver-deployment-c7997dcc8- deployment-6555 /api/v1/namespaces/deployment-6555/pods/webserver-deployment-c7997dcc8-qsx66 b0ea6619-4c77-416e-a7e5-9d77377c5d92 4069520 0 2020-03-30 22:10:20 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 736e218e-584a-4025-bd13-32b6be32d324 0xc003248d17 0xc003248d18}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ktgl9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ktgl9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ktgl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,Readin
essGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 30 22:10:21.117: INFO: Pod "webserver-deployment-c7997dcc8-rvw5w" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-rvw5w webserver-deployment-c7997dcc8- deployment-6555 /api/v1/namespaces/deployment-6555/pods/webserver-deployment-c7997dcc8-rvw5w 31980c2d-97f7-49e2-921a-7d286b3710bd 4069522 0 2020-03-30 22:10:20 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 736e218e-584a-4025-bd13-32b6be32d324 0xc003248e47 0xc003248e48}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ktgl9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ktgl9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ktgl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClass
Name:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Mar 30 22:10:21.117: INFO: Pod "webserver-deployment-c7997dcc8-scfgw" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-scfgw webserver-deployment-c7997dcc8- deployment-6555 /api/v1/namespaces/deployment-6555/pods/webserver-deployment-c7997dcc8-scfgw 9bacbb6c-7e0f-4c28-8964-fef801583e1f 4069454 0 2020-03-30 22:10:18 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 736e218e-584a-4025-bd13-32b6be32d324 0xc003248f77 0xc003248f78}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ktgl9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ktgl9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ktgl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,Tol
erationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:18 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-03-30 22:10:18 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-03-30 22:10:18 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:10:21.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6555" for this suite. 
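The pod dumps above come from the proportional-scaling Deployment test summarized just below: when a Deployment is scaled mid-rollout, the controller splits the replica delta across its old and new ReplicaSets in proportion to their current sizes. A minimal Go sketch of that arithmetic follows; it is a simplification (the real controller in kube-controller-manager also accounts for surge capacity and spreads rounding leftovers differently), and the function name is invented for illustration.

package main

import "fmt"

// proportionalScale is a simplified, illustrative take on how the
// Deployment controller distributes a replica delta across ReplicaSets
// in proportion to their current sizes.
func proportionalScale(current []int32, delta int32) []int32 {
	var total int32
	for _, c := range current {
		total += c
	}
	scaled := make([]int32, len(current))
	var distributed int32
	for i, c := range current {
		share := delta * c / total // truncated proportional share
		scaled[i] = c + share
		distributed += share
	}
	// Hand any rounding leftover to the largest ReplicaSet so the
	// total still matches the requested replica count.
	if rem := delta - distributed; rem != 0 {
		max := 0
		for i, c := range current {
			if c > current[max] {
				max = i
			}
		}
		scaled[max] += rem
	}
	return scaled
}

func main() {
	// Old ReplicaSet at 8 pods, new one at 5; scale the Deployment by +7.
	fmt.Println(proportionalScale([]int32{8, 5}, 7)) // [13 7]
}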
• [SLOW TEST:13.265 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":235,"skipped":3820,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:10:21.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Mar 30 22:10:21.752: INFO: Waiting up to 5m0s for pod "pod-c6bbc95b-bafc-4f74-a295-a4bee2decd6b" in namespace "emptydir-2552" to be "success or failure" Mar 30 22:10:21.778: INFO: Pod "pod-c6bbc95b-bafc-4f74-a295-a4bee2decd6b": Phase="Pending", Reason="", readiness=false. Elapsed: 25.369608ms Mar 30 22:10:23.892: INFO: Pod "pod-c6bbc95b-bafc-4f74-a295-a4bee2decd6b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.139328364s Mar 30 22:10:26.020: INFO: Pod "pod-c6bbc95b-bafc-4f74-a295-a4bee2decd6b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.267320864s Mar 30 22:10:28.807: INFO: Pod "pod-c6bbc95b-bafc-4f74-a295-a4bee2decd6b": Phase="Pending", Reason="", readiness=false. Elapsed: 7.054443725s Mar 30 22:10:31.037: INFO: Pod "pod-c6bbc95b-bafc-4f74-a295-a4bee2decd6b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.28416936s Mar 30 22:10:33.042: INFO: Pod "pod-c6bbc95b-bafc-4f74-a295-a4bee2decd6b": Phase="Pending", Reason="", readiness=false. Elapsed: 11.289265024s Mar 30 22:10:35.136: INFO: Pod "pod-c6bbc95b-bafc-4f74-a295-a4bee2decd6b": Phase="Pending", Reason="", readiness=false. Elapsed: 13.383475687s Mar 30 22:10:37.219: INFO: Pod "pod-c6bbc95b-bafc-4f74-a295-a4bee2decd6b": Phase="Running", Reason="", readiness=true. Elapsed: 15.466381837s Mar 30 22:10:39.223: INFO: Pod "pod-c6bbc95b-bafc-4f74-a295-a4bee2decd6b": Phase="Running", Reason="", readiness=true. Elapsed: 17.470563003s Mar 30 22:10:41.226: INFO: Pod "pod-c6bbc95b-bafc-4f74-a295-a4bee2decd6b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 19.473721791s STEP: Saw pod success Mar 30 22:10:41.226: INFO: Pod "pod-c6bbc95b-bafc-4f74-a295-a4bee2decd6b" satisfied condition "success or failure" Mar 30 22:10:41.228: INFO: Trying to get logs from node jerma-worker2 pod pod-c6bbc95b-bafc-4f74-a295-a4bee2decd6b container test-container: STEP: delete the pod Mar 30 22:10:41.293: INFO: Waiting for pod pod-c6bbc95b-bafc-4f74-a295-a4bee2decd6b to disappear Mar 30 22:10:41.299: INFO: Pod pod-c6bbc95b-bafc-4f74-a295-a4bee2decd6b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:10:41.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2552" for this suite. • [SLOW TEST:19.828 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":236,"skipped":3825,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:10:41.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-0a858d98-767d-4a75-885c-76df03b1d9e1 STEP: Creating a pod to test consume secrets Mar 30 22:10:41.371: INFO: Waiting up to 5m0s for pod "pod-secrets-57c50224-858d-4b84-a420-951b25bdf28b" in namespace "secrets-6637" to be "success or failure" Mar 30 22:10:41.387: INFO: Pod "pod-secrets-57c50224-858d-4b84-a420-951b25bdf28b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.658302ms Mar 30 22:10:43.391: INFO: Pod "pod-secrets-57c50224-858d-4b84-a420-951b25bdf28b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020074294s Mar 30 22:10:45.406: INFO: Pod "pod-secrets-57c50224-858d-4b84-a420-951b25bdf28b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.035177861s STEP: Saw pod success Mar 30 22:10:45.406: INFO: Pod "pod-secrets-57c50224-858d-4b84-a420-951b25bdf28b" satisfied condition "success or failure" Mar 30 22:10:45.408: INFO: Trying to get logs from node jerma-worker pod pod-secrets-57c50224-858d-4b84-a420-951b25bdf28b container secret-volume-test: STEP: delete the pod Mar 30 22:10:45.438: INFO: Waiting for pod pod-secrets-57c50224-858d-4b84-a420-951b25bdf28b to disappear Mar 30 22:10:45.443: INFO: Pod pod-secrets-57c50224-858d-4b84-a420-951b25bdf28b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:10:45.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6637" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":237,"skipped":3844,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:10:45.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 30 22:10:45.517: INFO: Waiting up to 5m0s for pod "pod-39b60d04-f569-4ad2-b82c-90fe84055cc3" in namespace "emptydir-6798" to be "success or failure" Mar 30 22:10:45.538: INFO: Pod "pod-39b60d04-f569-4ad2-b82c-90fe84055cc3": Phase="Pending", Reason="", readiness=false. Elapsed: 21.069164ms Mar 30 22:10:47.542: INFO: Pod "pod-39b60d04-f569-4ad2-b82c-90fe84055cc3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025087429s Mar 30 22:10:49.546: INFO: Pod "pod-39b60d04-f569-4ad2-b82c-90fe84055cc3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029150842s STEP: Saw pod success Mar 30 22:10:49.546: INFO: Pod "pod-39b60d04-f569-4ad2-b82c-90fe84055cc3" satisfied condition "success or failure" Mar 30 22:10:49.549: INFO: Trying to get logs from node jerma-worker pod pod-39b60d04-f569-4ad2-b82c-90fe84055cc3 container test-container: STEP: delete the pod Mar 30 22:10:49.576: INFO: Waiting for pod pod-39b60d04-f569-4ad2-b82c-90fe84055cc3 to disappear Mar 30 22:10:49.586: INFO: Pod pod-39b60d04-f569-4ad2-b82c-90fe84055cc3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:10:49.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6798" for this suite. 
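The two EmptyDir specs above (0644 and 0666 on tmpfs) each build a pod whose volume uses medium Memory and whose container inspects the mount before exiting, which is why the pod runs to Succeeded. A minimal sketch of such a pod using the client-go types; the names, image, and command are illustrative rather than the suite's exact ones.

package main

import (
	"encoding/json"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A pod in the spirit of the emptyDir (root,0666,tmpfs) test: a
	// memory-backed emptyDir mounted at /test-volume, checked by a
	// short-lived container.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "mount | grep /test-volume && ls -ld /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" is what makes this a tmpfs mount.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
	if err := json.NewEncoder(os.Stdout).Encode(pod); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}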
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":238,"skipped":3872,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:10:49.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Mar 30 22:10:49.815: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 22:10:49.849: INFO: Number of nodes with available pods: 0 Mar 30 22:10:49.849: INFO: Node jerma-worker is running more than one daemon pod Mar 30 22:10:50.963: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 22:10:50.966: INFO: Number of nodes with available pods: 0 Mar 30 22:10:50.966: INFO: Node jerma-worker is running more than one daemon pod Mar 30 22:10:51.957: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 22:10:51.960: INFO: Number of nodes with available pods: 0 Mar 30 22:10:51.960: INFO: Node jerma-worker is running more than one daemon pod Mar 30 22:10:52.854: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 22:10:52.857: INFO: Number of nodes with available pods: 0 Mar 30 22:10:52.857: INFO: Node jerma-worker is running more than one daemon pod Mar 30 22:10:53.855: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 22:10:53.858: INFO: Number of nodes with available pods: 1 Mar 30 22:10:53.858: INFO: Node jerma-worker is running more than one daemon pod Mar 30 22:10:54.853: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 22:10:54.856: INFO: Number of nodes with available pods: 2 Mar 30 22:10:54.856: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Mar 30 22:10:54.875: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 22:10:54.878: INFO: Number of nodes with available pods: 1 Mar 30 22:10:54.878: INFO: Node jerma-worker2 is running more than one daemon pod Mar 30 22:10:55.931: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 22:10:56.035: INFO: Number of nodes with available pods: 1 Mar 30 22:10:56.035: INFO: Node jerma-worker2 is running more than one daemon pod Mar 30 22:10:56.883: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 22:10:56.887: INFO: Number of nodes with available pods: 1 Mar 30 22:10:56.887: INFO: Node jerma-worker2 is running more than one daemon pod Mar 30 22:10:57.904: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 22:10:57.907: INFO: Number of nodes with available pods: 1 Mar 30 22:10:57.907: INFO: Node jerma-worker2 is running more than one daemon pod Mar 30 22:10:58.883: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 22:10:58.887: INFO: Number of nodes with available pods: 1 Mar 30 22:10:58.887: INFO: Node jerma-worker2 is running more than one daemon pod Mar 30 22:10:59.883: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 22:10:59.887: INFO: Number of nodes with available pods: 1 Mar 30 22:10:59.887: INFO: Node jerma-worker2 is running more than one daemon pod Mar 30 22:11:00.883: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 22:11:00.887: INFO: Number of nodes with available pods: 2 Mar 30 22:11:00.887: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8958, will wait for the garbage collector to delete the pods Mar 30 22:11:00.949: INFO: Deleting DaemonSet.extensions daemon-set took: 6.12108ms Mar 30 22:11:01.249: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.242228ms Mar 30 22:11:09.553: INFO: Number of nodes with available pods: 0 Mar 30 22:11:09.553: INFO: Number of running nodes: 0, number of available pods: 0 Mar 30 22:11:09.556: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8958/daemonsets","resourceVersion":"4070026"},"items":null} Mar 30 22:11:09.559: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8958/pods","resourceVersion":"4070026"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 
Mar 30 22:11:09.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8958" for this suite. • [SLOW TEST:19.984 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":239,"skipped":3872,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:11:09.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-d074f3c1-fdcb-488a-9b55-67ebf11c195a STEP: Creating secret with name s-test-opt-upd-f1c9bf35-58b4-4780-9cbd-baceb667c2f2 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-d074f3c1-fdcb-488a-9b55-67ebf11c195a STEP: Updating secret s-test-opt-upd-f1c9bf35-58b4-4780-9cbd-baceb667c2f2 STEP: Creating secret with name s-test-opt-create-50d97183-8e8e-4ec2-b6f1-ac391779b2fa STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:11:17.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4365" for this suite. 
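The secret-volume spec above exercises Optional secret mounts: one secret is deleted, one updated, and one created after the pod starts, and the kubelet is expected to converge the mounted files. A minimal sketch of the volume shape involved, with an illustrative secret name:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := true
	// Optional=true lets the pod start (and the mount remain) even if
	// the referenced secret is missing or deleted; the kubelet syncs
	// new contents into the volume when the secret appears or changes.
	vol := corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName: "s-test-opt-del-demo", // illustrative name
				Optional:   &optional,
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}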
• [SLOW TEST:8.230 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":240,"skipped":3881,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:11:17.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:11:24.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4416" for this suite. STEP: Destroying namespace "nsdeletetest-9782" for this suite. Mar 30 22:11:24.192: INFO: Namespace nsdeletetest-9782 was already deleted STEP: Destroying namespace "nsdeletetest-9468" for this suite. 
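The Namespaces spec above checks that deleting a namespace garbage-collects the services inside it. A rough client-go sketch of the same check follows; signatures match the v0.17-era client under test (newer releases add a context.Context first argument), and the namespace name is illustrative.

package main

import (
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns := "nsdeletetest-demo" // illustrative namespace name
	if err := cs.CoreV1().Namespaces().Delete(ns, &metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	// Mirror the suite's "Waiting for the namespace to be removed" step.
	err = wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
		_, getErr := cs.CoreV1().Namespaces().Get(ns, metav1.GetOptions{})
		return apierrors.IsNotFound(getErr), nil
	})
	if err != nil {
		panic(err)
	}
	// Once the namespace is gone, any services it held are gone with it.
	fmt.Println("namespace deleted; its services were garbage-collected")
}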
• [SLOW TEST:6.387 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":241,"skipped":3928,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:11:24.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 30 22:11:24.455: INFO: Waiting up to 5m0s for pod "downwardapi-volume-69046d4f-da00-46f5-ba1f-0f1a8846196c" in namespace "projected-1566" to be "success or failure" Mar 30 22:11:24.468: INFO: Pod "downwardapi-volume-69046d4f-da00-46f5-ba1f-0f1a8846196c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.954174ms Mar 30 22:11:26.493: INFO: Pod "downwardapi-volume-69046d4f-da00-46f5-ba1f-0f1a8846196c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03799501s Mar 30 22:11:28.497: INFO: Pod "downwardapi-volume-69046d4f-da00-46f5-ba1f-0f1a8846196c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041846725s STEP: Saw pod success Mar 30 22:11:28.497: INFO: Pod "downwardapi-volume-69046d4f-da00-46f5-ba1f-0f1a8846196c" satisfied condition "success or failure" Mar 30 22:11:28.500: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-69046d4f-da00-46f5-ba1f-0f1a8846196c container client-container: STEP: delete the pod Mar 30 22:11:28.517: INFO: Waiting for pod downwardapi-volume-69046d4f-da00-46f5-ba1f-0f1a8846196c to disappear Mar 30 22:11:28.522: INFO: Pod downwardapi-volume-69046d4f-da00-46f5-ba1f-0f1a8846196c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:11:28.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1566" for this suite. 
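The projected downwardAPI spec above mounts the container's memory limit as a file so the test can compare the file contents against the declared limit. A minimal sketch of that volume, with illustrative volume and container names:

package main

import (
	"encoding/json"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// A projected downwardAPI volume exposing limits.memory as a file
	// the container can cat back to the test.
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container", // must name a container in the pod
								Resource:      "limits.memory",
							},
						}},
					},
				}},
			},
		},
	}
	if err := json.NewEncoder(os.Stdout).Encode(vol); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}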
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":242,"skipped":3969,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:11:28.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 30 22:11:28.949: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 30 22:11:30.996: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721203088, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721203088, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721203089, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721203088, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 30 22:11:34.028: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook 
[Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:11:34.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4250" for this suite. STEP: Destroying namespace "webhook-4250-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.599 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":243,"skipped":4049,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:11:34.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-910 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Mar 30 22:11:34.246: INFO: Found 0 stateful pods, waiting for 3 Mar 30 22:11:44.275: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 30 22:11:44.275: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 30 22:11:44.275: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Mar 30 22:11:54.263: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 30 22:11:54.263: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 30 22:11:54.263: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Mar 30 22:11:54.289: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Mar 30 
22:12:04.344: INFO: Updating stateful set ss2 Mar 30 22:12:04.374: INFO: Waiting for Pod statefulset-910/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted Mar 30 22:12:14.802: INFO: Found 2 stateful pods, waiting for 3 Mar 30 22:12:24.807: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Mar 30 22:12:24.807: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Mar 30 22:12:24.807: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Mar 30 22:12:24.830: INFO: Updating stateful set ss2 Mar 30 22:12:24.838: INFO: Waiting for Pod statefulset-910/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Mar 30 22:12:34.863: INFO: Updating stateful set ss2 Mar 30 22:12:34.919: INFO: Waiting for StatefulSet statefulset-910/ss2 to complete update Mar 30 22:12:34.919: INFO: Waiting for Pod statefulset-910/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Mar 30 22:12:44.927: INFO: Deleting all statefulset in ns statefulset-910 Mar 30 22:12:44.930: INFO: Scaling statefulset ss2 to 0 Mar 30 22:13:04.947: INFO: Waiting for statefulset status.replicas updated to 0 Mar 30 22:13:04.950: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:13:04.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-910" for this suite. 
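The canary and phased rolling updates above are driven by the StatefulSet RollingUpdate strategy's partition field: pods with an ordinal at or above the partition move to the new revision, lower ordinals keep the old one, and lowering the partition step by step phases the rollout. A minimal sketch of the strategy:

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
)

func main() {
	// With 3 replicas (ordinals 0..2), partition=2 updates only ss2-2,
	// the canary; lowering the partition to 1 and then 0 rolls the
	// rest forward in phases.
	partition := int32(2)
	strategy := appsv1.StatefulSetUpdateStrategy{
		Type: appsv1.RollingUpdateStatefulSetStrategyType,
		RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
			Partition: &partition,
		},
	}
	fmt.Printf("%+v\n", strategy)
}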
• [SLOW TEST:90.840 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":244,"skipped":4052,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:13:04.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Mar 30 22:13:13.095: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 30 22:13:13.099: INFO: Pod pod-with-prestop-exec-hook still exists Mar 30 22:13:15.099: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 30 22:13:15.106: INFO: Pod pod-with-prestop-exec-hook still exists Mar 30 22:13:17.099: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Mar 30 22:13:17.104: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:13:17.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7049" for this suite. 
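The lifecycle-hook spec above attaches a preStop exec hook, deletes the pod, and then asks the handler pod whether the hook fired before termination. A minimal sketch of the hook shape; the command and endpoint are illustrative, and corev1.Handler is the v1.17-era type name (later API versions call it LifecycleHandler):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// On pod deletion the kubelet runs the preStop command before
	// sending SIGTERM; here the command phones home to a handler pod.
	lifecycle := &corev1.Lifecycle{
		PreStop: &corev1.Handler{
			Exec: &corev1.ExecAction{
				// Illustrative endpoint, not the suite's actual handler address.
				Command: []string{"sh", "-c", "wget -qO- http://pod-handler:8080/echo?msg=prestop"},
			},
		},
	}
	fmt.Printf("%+v\n", lifecycle)
}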
• [SLOW TEST:12.155 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":245,"skipped":4060,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:13:17.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Mar 30 22:13:21.775: INFO: Successfully updated pod "pod-update-84ebef25-d85e-4ffd-b4a8-a1f38256a9a0" STEP: verifying the updated pod is in kubernetes Mar 30 22:13:21.799: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:13:21.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2293" for this suite. 
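The Pods update spec above is a plain read-modify-write: fetch the pod, change a mutable field, and send it back. A rough client-go sketch under the same v0.17-era signatures (newer releases take a context.Context), with illustrative pod and namespace names:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods := cs.CoreV1().Pods("default") // illustrative namespace
	pod, err := pods.Get("pod-update-demo", metav1.GetOptions{}) // illustrative pod name
	if err != nil {
		panic(err)
	}
	if pod.Labels == nil {
		pod.Labels = map[string]string{}
	}
	pod.Labels["time"] = "updated" // only mutable fields may change on update
	if _, err := pods.Update(pod); err != nil {
		panic(err) // a conflict here means someone updated the pod first
	}
	fmt.Println("pod updated")
}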
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":246,"skipped":4063,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:13:21.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 30 22:13:21.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Mar 30 22:13:22.458: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-30T22:13:22Z generation:1 name:name1 resourceVersion:4070924 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:c9a4bd5f-f538-4240-8aed-927eff779c3e] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Mar 30 22:13:32.464: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-30T22:13:32Z generation:1 name:name2 resourceVersion:4070966 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:4b52cf57-7906-4680-8b91-cc11b4baabc8] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Mar 30 22:13:42.469: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-30T22:13:22Z generation:2 name:name1 resourceVersion:4070998 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:c9a4bd5f-f538-4240-8aed-927eff779c3e] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Mar 30 22:13:52.476: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-30T22:13:32Z generation:2 name:name2 resourceVersion:4071032 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:4b52cf57-7906-4680-8b91-cc11b4baabc8] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Mar 30 22:14:02.482: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-30T22:13:22Z generation:2 name:name1 resourceVersion:4071062 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:c9a4bd5f-f538-4240-8aed-927eff779c3e] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Mar 30 22:14:12.490: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-03-30T22:13:32Z generation:2 name:name2 resourceVersion:4071093 
selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:4b52cf57-7906-4680-8b91-cc11b4baabc8] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:14:23.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-7289" for this suite. • [SLOW TEST:61.200 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":247,"skipped":4086,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:14:23.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 30 22:14:23.491: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created Mar 30 22:14:25.929: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721203263, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721203263, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721203263, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721203263, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Mar 30 22:14:28.050: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721203263, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721203263, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721203263, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721203263, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 30 22:14:30.968: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 30 22:14:30.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2451-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:14:32.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5315" for this suite. STEP: Destroying namespace "webhook-5315-markers" for this suite. 
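Registering a mutating webhook for a CRD-backed resource, as this spec does via the AdmissionRegistration API, amounts to a configuration like the sketch below: the rule targets the custom group/version/resource, and the client config points at the in-cluster service fronting the webhook deployment. All names, the path, and the CABundle are placeholders, not the suite's generated values.

package main

import (
	"fmt"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	sideEffects := admissionregistrationv1.SideEffectClassNone
	path := "/mutating-custom-resource" // placeholder path
	cfg := admissionregistrationv1.MutatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "mutate-custom-resource-demo"},
		Webhooks: []admissionregistrationv1.MutatingWebhook{{
			Name: "mutate-cr.webhook.example.com",
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{"webhook.example.com"}, // the CRD's group
					APIVersions: []string{"v1"},
					Resources:   []string{"testcrds"}, // placeholder plural
				},
			}},
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-demo",     // placeholder namespace
					Name:      "e2e-test-webhook", // service fronting the webhook pod
					Path:      &path,
				},
				CABundle: []byte("placeholder-ca-bundle"), // must be the real serving CA
			},
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
	fmt.Printf("%+v\n", cfg)
}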
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.173 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":248,"skipped":4086,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:14:32.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 30 22:14:32.242: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:14:38.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7635" for this suite. 
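Listing CustomResourceDefinition objects, as this spec does, goes through the apiextensions clientset rather than the core one. A rough sketch with v0.17-era signatures (current clients would use ApiextensionsV1 and pass a context.Context):

package main

import (
	"fmt"

	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := apiextensionsclient.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// CRDs are cluster-scoped, so no namespace is involved.
	crds, err := cs.ApiextensionsV1beta1().CustomResourceDefinitions().List(metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, crd := range crds.Items {
		fmt.Println(crd.Name)
	}
}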
• [SLOW TEST:5.891 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":249,"skipped":4134,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:14:38.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service nodeport-service with the type=NodePort in namespace services-8238 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-8238 STEP: creating replication controller externalsvc in namespace services-8238 I0330 22:14:38.255552 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-8238, replica count: 2 I0330 22:14:41.305968 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0330 22:14:44.306227 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Mar 30 22:14:44.362: INFO: Creating new exec pod Mar 30 22:14:48.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8238 execpodrxfk5 -- /bin/sh -x -c nslookup nodeport-service' Mar 30 22:14:51.136: INFO: stderr: "I0330 22:14:51.030897 2887 log.go:172] (0xc00092abb0) (0xc0006d5f40) Create stream\nI0330 22:14:51.030932 2887 log.go:172] (0xc00092abb0) (0xc0006d5f40) Stream added, broadcasting: 1\nI0330 22:14:51.034217 2887 log.go:172] (0xc00092abb0) Reply frame received for 1\nI0330 22:14:51.034286 2887 log.go:172] (0xc00092abb0) (0xc0005f86e0) Create stream\nI0330 22:14:51.034300 2887 log.go:172] (0xc00092abb0) (0xc0005f86e0) Stream added, broadcasting: 3\nI0330 22:14:51.035298 2887 log.go:172] (0xc00092abb0) Reply frame received for 3\nI0330 22:14:51.035341 2887 log.go:172] 
(0xc00092abb0) (0xc0003094a0) Create stream\nI0330 22:14:51.035354 2887 log.go:172] (0xc00092abb0) (0xc0003094a0) Stream added, broadcasting: 5\nI0330 22:14:51.036201 2887 log.go:172] (0xc00092abb0) Reply frame received for 5\nI0330 22:14:51.122683 2887 log.go:172] (0xc00092abb0) Data frame received for 5\nI0330 22:14:51.122716 2887 log.go:172] (0xc0003094a0) (5) Data frame handling\nI0330 22:14:51.122736 2887 log.go:172] (0xc0003094a0) (5) Data frame sent\n+ nslookup nodeport-service\nI0330 22:14:51.127392 2887 log.go:172] (0xc00092abb0) Data frame received for 3\nI0330 22:14:51.127412 2887 log.go:172] (0xc0005f86e0) (3) Data frame handling\nI0330 22:14:51.127427 2887 log.go:172] (0xc0005f86e0) (3) Data frame sent\nI0330 22:14:51.128338 2887 log.go:172] (0xc00092abb0) Data frame received for 3\nI0330 22:14:51.128358 2887 log.go:172] (0xc0005f86e0) (3) Data frame handling\nI0330 22:14:51.128377 2887 log.go:172] (0xc0005f86e0) (3) Data frame sent\nI0330 22:14:51.128677 2887 log.go:172] (0xc00092abb0) Data frame received for 5\nI0330 22:14:51.128691 2887 log.go:172] (0xc0003094a0) (5) Data frame handling\nI0330 22:14:51.128723 2887 log.go:172] (0xc00092abb0) Data frame received for 3\nI0330 22:14:51.128758 2887 log.go:172] (0xc0005f86e0) (3) Data frame handling\nI0330 22:14:51.131145 2887 log.go:172] (0xc00092abb0) Data frame received for 1\nI0330 22:14:51.131165 2887 log.go:172] (0xc0006d5f40) (1) Data frame handling\nI0330 22:14:51.131179 2887 log.go:172] (0xc0006d5f40) (1) Data frame sent\nI0330 22:14:51.131191 2887 log.go:172] (0xc00092abb0) (0xc0006d5f40) Stream removed, broadcasting: 1\nI0330 22:14:51.131217 2887 log.go:172] (0xc00092abb0) Go away received\nI0330 22:14:51.131502 2887 log.go:172] (0xc00092abb0) (0xc0006d5f40) Stream removed, broadcasting: 1\nI0330 22:14:51.131516 2887 log.go:172] (0xc00092abb0) (0xc0005f86e0) Stream removed, broadcasting: 3\nI0330 22:14:51.131522 2887 log.go:172] (0xc00092abb0) (0xc0003094a0) Stream removed, broadcasting: 5\n" Mar 30 22:14:51.136: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-8238.svc.cluster.local\tcanonical name = externalsvc.services-8238.svc.cluster.local.\nName:\texternalsvc.services-8238.svc.cluster.local\nAddress: 10.108.144.156\n\n" STEP: deleting ReplicationController externalsvc in namespace services-8238, will wait for the garbage collector to delete the pods Mar 30 22:14:51.202: INFO: Deleting ReplicationController externalsvc took: 6.64641ms Mar 30 22:14:51.502: INFO: Terminating ReplicationController externalsvc pods took: 300.249712ms Mar 30 22:14:59.323: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:14:59.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8238" for this suite. 
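The nslookup stdout above is the behavior under test: once the service type is ExternalName, the cluster DNS answers for nodeport-service with a CNAME pointing at externalsvc's in-cluster FQDN. A minimal sketch of the same mechanism, assuming a backing service already exists; the alias-svc and dns-check names are illustrative:

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: alias-svc
spec:
  type: ExternalName
  externalName: externalsvc.services-8238.svc.cluster.local
EOF
$ kubectl run dns-check --image=busybox:1.28 --rm -i --restart=Never -- nslookup alias-svc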
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:21.287 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":250,"skipped":4161,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:14:59.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 30 22:15:00.083: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created Mar 30 22:15:02.097: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721203300, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721203300, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721203300, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721203300, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 30 22:15:05.137: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error 
when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:15:17.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6886" for this suite. STEP: Destroying namespace "webhook-6886-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.082 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":251,"skipped":4167,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:15:17.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Mar 30 22:15:21.518: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-2306 PodName:pod-sharedvolume-86635c23-7e5d-43c8-bdb3-e6345484d13c ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Mar 30 22:15:21.518: INFO: >>> kubeConfig: /root/.kube/config I0330 22:15:21.553256 6 log.go:172] (0xc001759290) (0xc002300960) Create stream I0330 22:15:21.553337 6 log.go:172] (0xc001759290) (0xc002300960) Stream added, broadcasting: 1 I0330 22:15:21.554965 6 log.go:172] (0xc001759290) Reply frame received for 1 I0330 22:15:21.555000 6 log.go:172] (0xc001759290) (0xc0023e4a00) Create stream I0330 22:15:21.555016 6 log.go:172] (0xc001759290) (0xc0023e4a00) Stream added, broadcasting: 3 I0330 22:15:21.555778 6 log.go:172] (0xc001759290) Reply frame received for 3 I0330 22:15:21.555805 6 log.go:172] (0xc001759290) (0xc001496280) Create stream I0330 22:15:21.555814 6 log.go:172] (0xc001759290) (0xc001496280) Stream added, broadcasting: 5 I0330 22:15:21.556632 6 log.go:172] (0xc001759290) Reply frame received for 5 I0330 22:15:21.627148 6 log.go:172] (0xc001759290) Data frame received for 5 I0330 22:15:21.627186 6 
log.go:172] (0xc001496280) (5) Data frame handling I0330 22:15:21.627210 6 log.go:172] (0xc001759290) Data frame received for 3 I0330 22:15:21.627224 6 log.go:172] (0xc0023e4a00) (3) Data frame handling I0330 22:15:21.627239 6 log.go:172] (0xc0023e4a00) (3) Data frame sent I0330 22:15:21.627256 6 log.go:172] (0xc001759290) Data frame received for 3 I0330 22:15:21.627269 6 log.go:172] (0xc0023e4a00) (3) Data frame handling I0330 22:15:21.629231 6 log.go:172] (0xc001759290) Data frame received for 1 I0330 22:15:21.629259 6 log.go:172] (0xc002300960) (1) Data frame handling I0330 22:15:21.629272 6 log.go:172] (0xc002300960) (1) Data frame sent I0330 22:15:21.629282 6 log.go:172] (0xc001759290) (0xc002300960) Stream removed, broadcasting: 1 I0330 22:15:21.629356 6 log.go:172] (0xc001759290) (0xc002300960) Stream removed, broadcasting: 1 I0330 22:15:21.629366 6 log.go:172] (0xc001759290) (0xc0023e4a00) Stream removed, broadcasting: 3 I0330 22:15:21.629496 6 log.go:172] (0xc001759290) (0xc001496280) Stream removed, broadcasting: 5 I0330 22:15:21.629526 6 log.go:172] (0xc001759290) Go away received Mar 30 22:15:21.629: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:15:21.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2306" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":252,"skipped":4170,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:15:21.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating secret secrets-9085/secret-test-d899473c-32f9-49f4-aa84-239d1e2406ad STEP: Creating a pod to test consume secrets Mar 30 22:15:21.752: INFO: Waiting up to 5m0s for pod "pod-configmaps-760df8ae-98a8-46af-87d4-576016d2eee5" in namespace "secrets-9085" to be "success or failure" Mar 30 22:15:21.763: INFO: Pod "pod-configmaps-760df8ae-98a8-46af-87d4-576016d2eee5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.971485ms Mar 30 22:15:23.767: INFO: Pod "pod-configmaps-760df8ae-98a8-46af-87d4-576016d2eee5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015015472s Mar 30 22:15:25.771: INFO: Pod "pod-configmaps-760df8ae-98a8-46af-87d4-576016d2eee5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01926674s STEP: Saw pod success Mar 30 22:15:25.771: INFO: Pod "pod-configmaps-760df8ae-98a8-46af-87d4-576016d2eee5" satisfied condition "success or failure" Mar 30 22:15:25.774: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-760df8ae-98a8-46af-87d4-576016d2eee5 container env-test: STEP: delete the pod Mar 30 22:15:25.858: INFO: Waiting for pod pod-configmaps-760df8ae-98a8-46af-87d4-576016d2eee5 to disappear Mar 30 22:15:25.863: INFO: Pod pod-configmaps-760df8ae-98a8-46af-87d4-576016d2eee5 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:15:25.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9085" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":253,"skipped":4260,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} S ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:15:25.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Mar 30 22:15:32.602: INFO: 0 pods remaining Mar 30 22:15:32.602: INFO: 0 pods has nil DeletionTimestamp Mar 30 22:15:32.602: INFO: STEP: Gathering metrics W0330 22:15:33.516543 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 30 22:15:33.516: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:15:33.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8984" for this suite. 
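What "if the deleteOptions says so" means here is foreground cascading deletion: with propagationPolicy set to Foreground, the replication controller is only marked with a deletion timestamp and stays visible until the garbage collector has removed all of its pods. A sketch of issuing such a delete against the REST API via kubectl proxy; the controller name my-rc and namespace default are placeholders:

$ kubectl proxy --port=8080 &
$ curl -X DELETE 'http://localhost:8080/api/v1/namespaces/default/replicationcontrollers/my-rc' \
    -H 'Content-Type: application/json' \
    -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}'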
• [SLOW TEST:8.028 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":254,"skipped":4261,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:15:33.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Mar 30 22:15:36.034: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Mar 30 22:15:38.045: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721203336, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721203336, loc:(*time.Location)(0x7d83a80)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721203336, loc:(*time.Location)(0x7d83a80)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721203336, loc:(*time.Location)(0x7d83a80)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Mar 30 22:15:41.113: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Mar 30 22:15:45.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-3307 to-be-attached-pod -i -c=container1' Mar 30 22:15:45.274: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:15:45.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3307" for this suite. STEP: Destroying namespace "webhook-3307-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:11.484 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":255,"skipped":4264,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:15:45.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override command Mar 30 22:15:45.435: INFO: Waiting up to 5m0s for pod "client-containers-a3d47451-6c3a-4ec0-9d66-28d975caa297" in namespace "containers-9008" to be "success or failure" Mar 30 22:15:45.439: INFO: Pod "client-containers-a3d47451-6c3a-4ec0-9d66-28d975caa297": Phase="Pending", Reason="", readiness=false. Elapsed: 3.984714ms Mar 30 22:15:47.443: INFO: Pod "client-containers-a3d47451-6c3a-4ec0-9d66-28d975caa297": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008294878s Mar 30 22:15:49.446: INFO: Pod "client-containers-a3d47451-6c3a-4ec0-9d66-28d975caa297": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011840882s STEP: Saw pod success Mar 30 22:15:49.446: INFO: Pod "client-containers-a3d47451-6c3a-4ec0-9d66-28d975caa297" satisfied condition "success or failure" Mar 30 22:15:49.449: INFO: Trying to get logs from node jerma-worker2 pod client-containers-a3d47451-6c3a-4ec0-9d66-28d975caa297 container test-container: STEP: delete the pod Mar 30 22:15:49.543: INFO: Waiting for pod client-containers-a3d47451-6c3a-4ec0-9d66-28d975caa297 to disappear Mar 30 22:15:49.553: INFO: Pod client-containers-a3d47451-6c3a-4ec0-9d66-28d975caa297 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:15:49.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9008" for this suite. 
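The override being tested maps Kubernetes pod fields onto the image's Docker metadata: command replaces ENTRYPOINT and args replaces CMD. A minimal sketch; the pod name and echoed text are illustrative:

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: entrypoint-override-demo
spec:
  restartPolicy: Never
  containers:
  - name: demo
    image: busybox:1.28
    command: ["/bin/echo"]              # replaces the image ENTRYPOINT
    args: ["entrypoint", "overridden"]  # replaces the image CMD
EOF
$ kubectl logs entrypoint-override-demo   # once the pod completes, prints: entrypoint overridden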
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":256,"skipped":4264,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:15:49.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC Mar 30 22:15:49.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6026' Mar 30 22:15:49.931: INFO: stderr: "" Mar 30 22:15:49.931: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 30 22:15:50.934: INFO: Selector matched 1 pods for map[app:agnhost] Mar 30 22:15:50.934: INFO: Found 0 / 1 Mar 30 22:15:51.936: INFO: Selector matched 1 pods for map[app:agnhost] Mar 30 22:15:51.936: INFO: Found 0 / 1 Mar 30 22:15:52.947: INFO: Selector matched 1 pods for map[app:agnhost] Mar 30 22:15:52.947: INFO: Found 1 / 1 Mar 30 22:15:52.947: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Mar 30 22:15:52.950: INFO: Selector matched 1 pods for map[app:agnhost] Mar 30 22:15:52.950: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Mar 30 22:15:52.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-psbzc --namespace=kubectl-6026 -p {"metadata":{"annotations":{"x":"y"}}}' Mar 30 22:15:53.045: INFO: stderr: "" Mar 30 22:15:53.045: INFO: stdout: "pod/agnhost-master-psbzc patched\n" STEP: checking annotations Mar 30 22:15:53.048: INFO: Selector matched 1 pods for map[app:agnhost] Mar 30 22:15:53.048: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:15:53.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6026" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":257,"skipped":4264,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:15:53.055: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 30 22:15:53.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9761' Mar 30 22:15:53.341: INFO: stderr: "" Mar 30 22:15:53.341: INFO: stdout: "replicationcontroller/agnhost-master created\n" Mar 30 22:15:53.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9761' Mar 30 22:15:53.817: INFO: stderr: "" Mar 30 22:15:53.817: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Mar 30 22:15:54.949: INFO: Selector matched 1 pods for map[app:agnhost] Mar 30 22:15:54.949: INFO: Found 0 / 1 Mar 30 22:15:55.821: INFO: Selector matched 1 pods for map[app:agnhost] Mar 30 22:15:55.821: INFO: Found 0 / 1 Mar 30 22:15:56.822: INFO: Selector matched 1 pods for map[app:agnhost] Mar 30 22:15:56.822: INFO: Found 1 / 1 Mar 30 22:15:56.822: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Mar 30 22:15:56.825: INFO: Selector matched 1 pods for map[app:agnhost] Mar 30 22:15:56.825: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Mar 30 22:15:56.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-xj6lg --namespace=kubectl-9761' Mar 30 22:15:56.932: INFO: stderr: "" Mar 30 22:15:56.932: INFO: stdout: "Name: agnhost-master-xj6lg\nNamespace: kubectl-9761\nPriority: 0\nNode: jerma-worker2/172.17.0.8\nStart Time: Mon, 30 Mar 2020 22:15:53 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.123\nIPs:\n IP: 10.244.2.123\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://9631b2467da5cf7af20339dbc509e9634f433c869f1b04765c4e5bd8479cdc8d\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Mon, 30 Mar 2020 22:15:56 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-nchh5 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-nchh5:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-nchh5\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned kubectl-9761/agnhost-master-xj6lg to jerma-worker2\n Normal Pulled 2s kubelet, jerma-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 1s kubelet, jerma-worker2 Created container agnhost-master\n Normal Started 0s kubelet, jerma-worker2 Started container agnhost-master\n" Mar 30 22:15:56.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-9761' Mar 30 22:15:57.055: INFO: stderr: "" Mar 30 22:15:57.055: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-9761\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-master-xj6lg\n" Mar 30 22:15:57.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-9761' Mar 30 22:15:57.153: INFO: stderr: "" Mar 30 22:15:57.153: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-9761\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.107.180.18\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.123:6379\nSession Affinity: None\nEvents: \n" Mar 30 22:15:57.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane' Mar 30 22:15:57.272: INFO: stderr: "" Mar 30 22:15:57.272: INFO: stdout: "Name: jerma-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n 
beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:25:55 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: jerma-control-plane\n AcquireTime: \n RenewTime: Mon, 30 Mar 2020 22:15:54 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Mon, 30 Mar 2020 22:11:05 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Mon, 30 Mar 2020 22:11:05 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Mon, 30 Mar 2020 22:11:05 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Mon, 30 Mar 2020 22:11:05 +0000 Sun, 15 Mar 2020 18:26:27 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.9\n Hostname: jerma-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3bcfb16fe77247d3af07bed975350d5c\n System UUID: 947a2db5-5527-4203-8af5-13d97ffe8a80\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2-31-gaa877d78\n Kubelet Version: v1.17.2\n Kube-Proxy Version: v1.17.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-rll5s 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 15d\n kube-system coredns-6955765f44-svxk5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 15d\n kube-system etcd-jerma-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 15d\n kube-system kindnet-bjddj 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 15d\n kube-system kube-apiserver-jerma-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 15d\n kube-system kube-controller-manager-jerma-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 15d\n kube-system kube-proxy-mm9zd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 15d\n kube-system kube-scheduler-jerma-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 15d\n local-path-storage local-path-provisioner-85445b74d4-7mg5w 0 (0%) 0 (0%) 0 (0%) 0 (0%) 15d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Mar 30 22:15:57.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-9761' Mar 30 22:15:57.367: INFO: stderr: "" Mar 30 22:15:57.367: INFO: stdout: "Name: kubectl-9761\nLabels: e2e-framework=kubectl\n e2e-run=23a40af6-e42f-4314-9a7f-91a6516d1b41\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo 
LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:15:57.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9761" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":258,"skipped":4274,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} ------------------------------ [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:15:57.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the initial replication controller Mar 30 22:15:57.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9591' Mar 30 22:15:57.747: INFO: stderr: "" Mar 30 22:15:57.747: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 30 22:15:57.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9591' Mar 30 22:15:57.837: INFO: stderr: "" Mar 30 22:15:57.837: INFO: stdout: "update-demo-nautilus-7l4xk update-demo-nautilus-rvcxl " Mar 30 22:15:57.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7l4xk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9591' Mar 30 22:15:57.918: INFO: stderr: "" Mar 30 22:15:57.918: INFO: stdout: "" Mar 30 22:15:57.918: INFO: update-demo-nautilus-7l4xk is created but not running Mar 30 22:16:02.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9591' Mar 30 22:16:03.027: INFO: stderr: "" Mar 30 22:16:03.027: INFO: stdout: "update-demo-nautilus-7l4xk update-demo-nautilus-rvcxl " Mar 30 22:16:03.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7l4xk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9591' Mar 30 22:16:03.118: INFO: stderr: "" Mar 30 22:16:03.118: INFO: stdout: "true" Mar 30 22:16:03.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7l4xk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9591' Mar 30 22:16:03.238: INFO: stderr: "" Mar 30 22:16:03.238: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 30 22:16:03.238: INFO: validating pod update-demo-nautilus-7l4xk Mar 30 22:16:03.242: INFO: got data: { "image": "nautilus.jpg" } Mar 30 22:16:03.242: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 30 22:16:03.242: INFO: update-demo-nautilus-7l4xk is verified up and running Mar 30 22:16:03.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rvcxl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9591' Mar 30 22:16:03.331: INFO: stderr: "" Mar 30 22:16:03.331: INFO: stdout: "true" Mar 30 22:16:03.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rvcxl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9591' Mar 30 22:16:03.424: INFO: stderr: "" Mar 30 22:16:03.424: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 30 22:16:03.425: INFO: validating pod update-demo-nautilus-rvcxl Mar 30 22:16:03.429: INFO: got data: { "image": "nautilus.jpg" } Mar 30 22:16:03.429: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 30 22:16:03.429: INFO: update-demo-nautilus-rvcxl is verified up and running STEP: rolling-update to new replication controller Mar 30 22:16:03.432: INFO: scanned /root for discovery docs: Mar 30 22:16:03.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-9591' Mar 30 22:16:25.976: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Mar 30 22:16:25.976: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 30 22:16:25.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9591' Mar 30 22:16:26.090: INFO: stderr: "" Mar 30 22:16:26.090: INFO: stdout: "update-demo-kitten-8z4h2 update-demo-kitten-thwrd " Mar 30 22:16:26.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-8z4h2 -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9591' Mar 30 22:16:26.181: INFO: stderr: "" Mar 30 22:16:26.181: INFO: stdout: "true" Mar 30 22:16:26.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-8z4h2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9591' Mar 30 22:16:26.273: INFO: stderr: "" Mar 30 22:16:26.273: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Mar 30 22:16:26.273: INFO: validating pod update-demo-kitten-8z4h2 Mar 30 22:16:26.277: INFO: got data: { "image": "kitten.jpg" } Mar 30 22:16:26.277: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Mar 30 22:16:26.277: INFO: update-demo-kitten-8z4h2 is verified up and running Mar 30 22:16:26.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-thwrd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9591' Mar 30 22:16:26.419: INFO: stderr: "" Mar 30 22:16:26.419: INFO: stdout: "true" Mar 30 22:16:26.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-thwrd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9591' Mar 30 22:16:26.516: INFO: stderr: "" Mar 30 22:16:26.516: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Mar 30 22:16:26.516: INFO: validating pod update-demo-kitten-thwrd Mar 30 22:16:26.520: INFO: got data: { "image": "kitten.jpg" } Mar 30 22:16:26.520: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Mar 30 22:16:26.520: INFO: update-demo-kitten-thwrd is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:16:26.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9591" for this suite. 
• [SLOW TEST:29.152 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:328 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":259,"skipped":4274,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:16:26.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-67f1591d-7c99-4630-a2d7-c4092ae95770 STEP: Creating a pod to test consume configMaps Mar 30 22:16:26.588: INFO: Waiting up to 5m0s for pod "pod-configmaps-5f8b5f57-e6c8-4cbf-9ebf-3ab9095d45a6" in namespace "configmap-3121" to be "success or failure" Mar 30 22:16:26.591: INFO: Pod "pod-configmaps-5f8b5f57-e6c8-4cbf-9ebf-3ab9095d45a6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.486195ms Mar 30 22:16:28.595: INFO: Pod "pod-configmaps-5f8b5f57-e6c8-4cbf-9ebf-3ab9095d45a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007511227s Mar 30 22:16:30.600: INFO: Pod "pod-configmaps-5f8b5f57-e6c8-4cbf-9ebf-3ab9095d45a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01157135s STEP: Saw pod success Mar 30 22:16:30.600: INFO: Pod "pod-configmaps-5f8b5f57-e6c8-4cbf-9ebf-3ab9095d45a6" satisfied condition "success or failure" Mar 30 22:16:30.603: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-5f8b5f57-e6c8-4cbf-9ebf-3ab9095d45a6 container configmap-volume-test: STEP: delete the pod Mar 30 22:16:30.624: INFO: Waiting for pod pod-configmaps-5f8b5f57-e6c8-4cbf-9ebf-3ab9095d45a6 to disappear Mar 30 22:16:30.628: INFO: Pod pod-configmaps-5f8b5f57-e6c8-4cbf-9ebf-3ab9095d45a6 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:16:30.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3121" for this suite. 
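The non-root variant of this configMap volume test reduces to an ordinary configMap volume plus a pod-level securityContext that forces an unprivileged UID. A minimal sketch; the UID 1000, names, and key/value are illustrative:

$ kubectl create configmap demo-config --from-literal=key=value
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: configmap-nonroot-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000        # any non-root UID
    runAsNonRoot: true
  containers:
  - name: demo
    image: busybox:1.28
    command: ["cat", "/etc/config/key"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    configMap:
      name: demo-config
EOF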
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":260,"skipped":4274,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:16:30.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 30 22:16:30.713: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2eb026bc-799a-455c-98d2-d88da6a47072" in namespace "projected-8789" to be "success or failure" Mar 30 22:16:30.745: INFO: Pod "downwardapi-volume-2eb026bc-799a-455c-98d2-d88da6a47072": Phase="Pending", Reason="", readiness=false. Elapsed: 31.850861ms Mar 30 22:16:32.749: INFO: Pod "downwardapi-volume-2eb026bc-799a-455c-98d2-d88da6a47072": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035769919s Mar 30 22:16:34.753: INFO: Pod "downwardapi-volume-2eb026bc-799a-455c-98d2-d88da6a47072": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039494968s STEP: Saw pod success Mar 30 22:16:34.753: INFO: Pod "downwardapi-volume-2eb026bc-799a-455c-98d2-d88da6a47072" satisfied condition "success or failure" Mar 30 22:16:34.756: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-2eb026bc-799a-455c-98d2-d88da6a47072 container client-container: STEP: delete the pod Mar 30 22:16:34.788: INFO: Waiting for pod downwardapi-volume-2eb026bc-799a-455c-98d2-d88da6a47072 to disappear Mar 30 22:16:34.793: INFO: Pod downwardapi-volume-2eb026bc-799a-455c-98d2-d88da6a47072 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:16:34.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8789" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":261,"skipped":4282,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:16:34.801: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:278 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Mar 30 22:16:34.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5481' Mar 30 22:16:35.072: INFO: stderr: "" Mar 30 22:16:35.072: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 30 22:16:35.072: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5481' Mar 30 22:16:35.202: INFO: stderr: "" Mar 30 22:16:35.202: INFO: stdout: "update-demo-nautilus-kpfr9 update-demo-nautilus-z6hnf " Mar 30 22:16:35.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kpfr9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5481' Mar 30 22:16:35.289: INFO: stderr: "" Mar 30 22:16:35.289: INFO: stdout: "" Mar 30 22:16:35.289: INFO: update-demo-nautilus-kpfr9 is created but not running Mar 30 22:16:40.289: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5481' Mar 30 22:16:40.395: INFO: stderr: "" Mar 30 22:16:40.395: INFO: stdout: "update-demo-nautilus-kpfr9 update-demo-nautilus-z6hnf " Mar 30 22:16:40.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kpfr9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5481' Mar 30 22:16:40.494: INFO: stderr: "" Mar 30 22:16:40.494: INFO: stdout: "true" Mar 30 22:16:40.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kpfr9 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5481' Mar 30 22:16:40.579: INFO: stderr: "" Mar 30 22:16:40.580: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 30 22:16:40.580: INFO: validating pod update-demo-nautilus-kpfr9 Mar 30 22:16:40.583: INFO: got data: { "image": "nautilus.jpg" } Mar 30 22:16:40.583: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 30 22:16:40.583: INFO: update-demo-nautilus-kpfr9 is verified up and running Mar 30 22:16:40.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z6hnf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5481' Mar 30 22:16:40.678: INFO: stderr: "" Mar 30 22:16:40.678: INFO: stdout: "true" Mar 30 22:16:40.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z6hnf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5481' Mar 30 22:16:40.778: INFO: stderr: "" Mar 30 22:16:40.778: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 30 22:16:40.778: INFO: validating pod update-demo-nautilus-z6hnf Mar 30 22:16:40.782: INFO: got data: { "image": "nautilus.jpg" } Mar 30 22:16:40.782: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 30 22:16:40.782: INFO: update-demo-nautilus-z6hnf is verified up and running STEP: scaling down the replication controller Mar 30 22:16:40.785: INFO: scanned /root for discovery docs: Mar 30 22:16:40.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-5481' Mar 30 22:16:41.915: INFO: stderr: "" Mar 30 22:16:41.915: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 30 22:16:41.915: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5481' Mar 30 22:16:42.012: INFO: stderr: "" Mar 30 22:16:42.012: INFO: stdout: "update-demo-nautilus-kpfr9 update-demo-nautilus-z6hnf " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 30 22:16:47.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5481' Mar 30 22:16:47.109: INFO: stderr: "" Mar 30 22:16:47.109: INFO: stdout: "update-demo-nautilus-kpfr9 update-demo-nautilus-z6hnf " STEP: Replicas for name=update-demo: expected=1 actual=2 Mar 30 22:16:52.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5481' Mar 30 22:16:52.204: INFO: stderr: "" Mar 30 22:16:52.204: INFO: stdout: "update-demo-nautilus-kpfr9 " Mar 30 22:16:52.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kpfr9 -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5481' Mar 30 22:16:52.296: INFO: stderr: "" Mar 30 22:16:52.296: INFO: stdout: "true" Mar 30 22:16:52.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kpfr9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5481' Mar 30 22:16:52.388: INFO: stderr: "" Mar 30 22:16:52.388: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 30 22:16:52.388: INFO: validating pod update-demo-nautilus-kpfr9 Mar 30 22:16:52.392: INFO: got data: { "image": "nautilus.jpg" } Mar 30 22:16:52.392: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 30 22:16:52.392: INFO: update-demo-nautilus-kpfr9 is verified up and running STEP: scaling up the replication controller Mar 30 22:16:52.395: INFO: scanned /root for discovery docs: Mar 30 22:16:52.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-5481' Mar 30 22:16:53.544: INFO: stderr: "" Mar 30 22:16:53.544: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Mar 30 22:16:53.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5481' Mar 30 22:16:53.641: INFO: stderr: "" Mar 30 22:16:53.641: INFO: stdout: "update-demo-nautilus-jpnxt update-demo-nautilus-kpfr9 " Mar 30 22:16:53.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jpnxt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5481' Mar 30 22:16:53.729: INFO: stderr: "" Mar 30 22:16:53.729: INFO: stdout: "" Mar 30 22:16:53.729: INFO: update-demo-nautilus-jpnxt is created but not running Mar 30 22:16:58.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5481' Mar 30 22:16:58.826: INFO: stderr: "" Mar 30 22:16:58.826: INFO: stdout: "update-demo-nautilus-jpnxt update-demo-nautilus-kpfr9 " Mar 30 22:16:58.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jpnxt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5481' Mar 30 22:16:58.915: INFO: stderr: "" Mar 30 22:16:58.915: INFO: stdout: "true" Mar 30 22:16:58.915: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jpnxt -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5481' Mar 30 22:16:59.004: INFO: stderr: "" Mar 30 22:16:59.004: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 30 22:16:59.004: INFO: validating pod update-demo-nautilus-jpnxt Mar 30 22:16:59.008: INFO: got data: { "image": "nautilus.jpg" } Mar 30 22:16:59.008: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 30 22:16:59.008: INFO: update-demo-nautilus-jpnxt is verified up and running Mar 30 22:16:59.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kpfr9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5481' Mar 30 22:16:59.096: INFO: stderr: "" Mar 30 22:16:59.096: INFO: stdout: "true" Mar 30 22:16:59.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kpfr9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5481' Mar 30 22:16:59.187: INFO: stderr: "" Mar 30 22:16:59.187: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Mar 30 22:16:59.187: INFO: validating pod update-demo-nautilus-kpfr9 Mar 30 22:16:59.190: INFO: got data: { "image": "nautilus.jpg" } Mar 30 22:16:59.190: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Mar 30 22:16:59.190: INFO: update-demo-nautilus-kpfr9 is verified up and running STEP: using delete to clean up resources Mar 30 22:16:59.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5481' Mar 30 22:16:59.305: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Mar 30 22:16:59.305: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Mar 30 22:16:59.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5481' Mar 30 22:16:59.406: INFO: stderr: "No resources found in kubectl-5481 namespace.\n" Mar 30 22:16:59.406: INFO: stdout: "" Mar 30 22:16:59.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5481 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 30 22:16:59.494: INFO: stderr: "" Mar 30 22:16:59.494: INFO: stdout: "update-demo-nautilus-jpnxt\nupdate-demo-nautilus-kpfr9\n" Mar 30 22:16:59.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5481' Mar 30 22:17:00.101: INFO: stderr: "No resources found in kubectl-5481 namespace.\n" Mar 30 22:17:00.101: INFO: stdout: "" Mar 30 22:17:00.101: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5481 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Mar 30 22:17:00.201: INFO: stderr: "" Mar 30 22:17:00.201: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:17:00.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5481" for this suite. • [SLOW TEST:25.407 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:328 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":278,"completed":262,"skipped":4291,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:17:00.208: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 30 22:17:00.288: INFO: Waiting up to 5m0s for 
pod "downwardapi-volume-79fb1665-7490-4722-9c20-5ab093a4cc26" in namespace "downward-api-5416" to be "success or failure" Mar 30 22:17:00.291: INFO: Pod "downwardapi-volume-79fb1665-7490-4722-9c20-5ab093a4cc26": Phase="Pending", Reason="", readiness=false. Elapsed: 3.101052ms Mar 30 22:17:02.295: INFO: Pod "downwardapi-volume-79fb1665-7490-4722-9c20-5ab093a4cc26": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007636948s Mar 30 22:17:04.300: INFO: Pod "downwardapi-volume-79fb1665-7490-4722-9c20-5ab093a4cc26": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012150251s STEP: Saw pod success Mar 30 22:17:04.300: INFO: Pod "downwardapi-volume-79fb1665-7490-4722-9c20-5ab093a4cc26" satisfied condition "success or failure" Mar 30 22:17:04.303: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-79fb1665-7490-4722-9c20-5ab093a4cc26 container client-container: STEP: delete the pod Mar 30 22:17:04.388: INFO: Waiting for pod downwardapi-volume-79fb1665-7490-4722-9c20-5ab093a4cc26 to disappear Mar 30 22:17:04.525: INFO: Pod downwardapi-volume-79fb1665-7490-4722-9c20-5ab093a4cc26 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:17:04.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5416" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":263,"skipped":4291,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:17:04.534: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-d3714ab0-fd5d-4153-9ab0-d7210f449729 STEP: Creating a pod to test consume secrets Mar 30 22:17:04.592: INFO: Waiting up to 5m0s for pod "pod-secrets-0d47ccba-a8c9-447d-95c8-fed8b9e58f6c" in namespace "secrets-7150" to be "success or failure" Mar 30 22:17:04.644: INFO: Pod "pod-secrets-0d47ccba-a8c9-447d-95c8-fed8b9e58f6c": Phase="Pending", Reason="", readiness=false. Elapsed: 51.054884ms Mar 30 22:17:06.648: INFO: Pod "pod-secrets-0d47ccba-a8c9-447d-95c8-fed8b9e58f6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055126027s Mar 30 22:17:08.652: INFO: Pod "pod-secrets-0d47ccba-a8c9-447d-95c8-fed8b9e58f6c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.059407551s STEP: Saw pod success Mar 30 22:17:08.652: INFO: Pod "pod-secrets-0d47ccba-a8c9-447d-95c8-fed8b9e58f6c" satisfied condition "success or failure" Mar 30 22:17:08.655: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-0d47ccba-a8c9-447d-95c8-fed8b9e58f6c container secret-volume-test: STEP: delete the pod Mar 30 22:17:08.688: INFO: Waiting for pod pod-secrets-0d47ccba-a8c9-447d-95c8-fed8b9e58f6c to disappear Mar 30 22:17:08.706: INFO: Pod pod-secrets-0d47ccba-a8c9-447d-95c8-fed8b9e58f6c no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:17:08.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7150" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":264,"skipped":4302,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:17:08.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 30 22:17:08.791: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Mar 30 22:17:08.810: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 22:17:08.820: INFO: Number of nodes with available pods: 0 Mar 30 22:17:08.820: INFO: Node jerma-worker is running more than one daemon pod Mar 30 22:17:09.866: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 22:17:09.870: INFO: Number of nodes with available pods: 0 Mar 30 22:17:09.870: INFO: Node jerma-worker is running more than one daemon pod Mar 30 22:17:10.825: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 22:17:10.830: INFO: Number of nodes with available pods: 0 Mar 30 22:17:10.830: INFO: Node jerma-worker is running more than one daemon pod Mar 30 22:17:11.825: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 22:17:11.828: INFO: Number of nodes with available pods: 0 Mar 30 22:17:11.828: INFO: Node jerma-worker is running more than one daemon pod Mar 30 22:17:12.826: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 22:17:12.830: INFO: Number of nodes with available pods: 2 Mar 30 22:17:12.830: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Mar 30 22:17:12.867: INFO: Wrong image for pod: daemon-set-dqf4g. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 30 22:17:12.867: INFO: Wrong image for pod: daemon-set-lc95n. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 30 22:17:12.882: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 22:17:13.886: INFO: Wrong image for pod: daemon-set-dqf4g. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 30 22:17:13.886: INFO: Wrong image for pod: daemon-set-lc95n. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 30 22:17:13.890: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 22:17:14.887: INFO: Wrong image for pod: daemon-set-dqf4g. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 30 22:17:14.887: INFO: Wrong image for pod: daemon-set-lc95n. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 30 22:17:14.891: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 22:17:15.886: INFO: Wrong image for pod: daemon-set-dqf4g. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Mar 30 22:17:15.886: INFO: Wrong image for pod: daemon-set-lc95n. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 30 22:17:15.886: INFO: Pod daemon-set-lc95n is not available Mar 30 22:17:15.890: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 22:17:16.887: INFO: Wrong image for pod: daemon-set-dqf4g. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 30 22:17:16.887: INFO: Wrong image for pod: daemon-set-lc95n. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 30 22:17:16.887: INFO: Pod daemon-set-lc95n is not available Mar 30 22:17:16.890: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 22:17:17.886: INFO: Wrong image for pod: daemon-set-dqf4g. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 30 22:17:17.886: INFO: Wrong image for pod: daemon-set-lc95n. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 30 22:17:17.886: INFO: Pod daemon-set-lc95n is not available Mar 30 22:17:17.891: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 22:17:18.887: INFO: Wrong image for pod: daemon-set-dqf4g. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 30 22:17:18.887: INFO: Wrong image for pod: daemon-set-lc95n. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 30 22:17:18.887: INFO: Pod daemon-set-lc95n is not available Mar 30 22:17:18.891: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 22:17:19.905: INFO: Wrong image for pod: daemon-set-dqf4g. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 30 22:17:19.905: INFO: Pod daemon-set-kbcrs is not available Mar 30 22:17:19.915: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 22:17:20.944: INFO: Wrong image for pod: daemon-set-dqf4g. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 30 22:17:20.944: INFO: Pod daemon-set-kbcrs is not available Mar 30 22:17:20.948: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 22:17:21.887: INFO: Wrong image for pod: daemon-set-dqf4g. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 30 22:17:21.887: INFO: Pod daemon-set-kbcrs is not available Mar 30 22:17:21.891: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 22:17:22.887: INFO: Wrong image for pod: daemon-set-dqf4g. 
Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 30 22:17:22.890: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 22:17:23.887: INFO: Wrong image for pod: daemon-set-dqf4g. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 30 22:17:23.887: INFO: Pod daemon-set-dqf4g is not available Mar 30 22:17:23.891: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 22:17:24.887: INFO: Wrong image for pod: daemon-set-dqf4g. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 30 22:17:24.887: INFO: Pod daemon-set-dqf4g is not available Mar 30 22:17:24.891: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 22:17:25.887: INFO: Wrong image for pod: daemon-set-dqf4g. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 30 22:17:25.887: INFO: Pod daemon-set-dqf4g is not available Mar 30 22:17:25.891: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 22:17:26.886: INFO: Wrong image for pod: daemon-set-dqf4g. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 30 22:17:26.886: INFO: Pod daemon-set-dqf4g is not available Mar 30 22:17:26.891: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 22:17:27.887: INFO: Wrong image for pod: daemon-set-dqf4g. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 30 22:17:27.887: INFO: Pod daemon-set-dqf4g is not available Mar 30 22:17:27.892: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 22:17:28.886: INFO: Wrong image for pod: daemon-set-dqf4g. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Mar 30 22:17:28.886: INFO: Pod daemon-set-dqf4g is not available Mar 30 22:17:28.891: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 22:17:29.886: INFO: Pod daemon-set-9gd7c is not available Mar 30 22:17:29.890: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Mar 30 22:17:29.894: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 22:17:29.897: INFO: Number of nodes with available pods: 1 Mar 30 22:17:29.897: INFO: Node jerma-worker2 is running more than one daemon pod Mar 30 22:17:30.902: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 22:17:30.905: INFO: Number of nodes with available pods: 1 Mar 30 22:17:30.905: INFO: Node jerma-worker2 is running more than one daemon pod Mar 30 22:17:31.902: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 22:17:31.905: INFO: Number of nodes with available pods: 1 Mar 30 22:17:31.905: INFO: Node jerma-worker2 is running more than one daemon pod Mar 30 22:17:32.902: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Mar 30 22:17:32.906: INFO: Number of nodes with available pods: 2 Mar 30 22:17:32.906: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5573, will wait for the garbage collector to delete the pods Mar 30 22:17:32.979: INFO: Deleting DaemonSet.extensions daemon-set took: 5.54673ms Mar 30 22:17:33.279: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.249085ms Mar 30 22:17:39.482: INFO: Number of nodes with available pods: 0 Mar 30 22:17:39.482: INFO: Number of running nodes: 0, number of available pods: 0 Mar 30 22:17:39.488: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5573/daemonsets","resourceVersion":"4072769"},"items":null} Mar 30 22:17:39.490: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5573/pods","resourceVersion":"4072769"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:17:39.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5573" for this suite. 
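For context: the update exercised above is a standard DaemonSet RollingUpdate. A minimal sketch of how the same rollout could be driven and watched by hand, assuming the namespace and object names from this log and the same kubeconfig; the container name "app" is a guess, since the log never prints the DaemonSet's container name:

    $ kubectl --kubeconfig=/root/.kube/config -n daemonsets-5573 \
        set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/agnhost:2.8   # "app" is assumed
    $ kubectl --kubeconfig=/root/.kube/config -n daemonsets-5573 \
        rollout status daemonset/daemon-set   # blocks until all scheduled pods run the new image

The polling loop in the log is doing by hand what `rollout status` waits for: old-image pods are deleted one at a time and replacements must become available before the next node is updated.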
• [SLOW TEST:30.782 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":265,"skipped":4308,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:17:39.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Mar 30 22:17:39.622: INFO: Waiting up to 5m0s for pod "busybox-user-65534-0ef5811e-b970-45ce-ab98-c81f23389e42" in namespace "security-context-test-7806" to be "success or failure" Mar 30 22:17:39.630: INFO: Pod "busybox-user-65534-0ef5811e-b970-45ce-ab98-c81f23389e42": Phase="Pending", Reason="", readiness=false. Elapsed: 8.100577ms Mar 30 22:17:41.633: INFO: Pod "busybox-user-65534-0ef5811e-b970-45ce-ab98-c81f23389e42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01162061s Mar 30 22:17:43.638: INFO: Pod "busybox-user-65534-0ef5811e-b970-45ce-ab98-c81f23389e42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015683929s Mar 30 22:17:43.638: INFO: Pod "busybox-user-65534-0ef5811e-b970-45ce-ab98-c81f23389e42" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:17:43.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7806" for this suite. 
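The test above only asserts that the pod reaches Succeeded; a minimal standalone sketch of the same runAsUser behavior, with a hypothetical pod name (not taken from the log) and the uid the test asserts:

    $ kubectl --kubeconfig=/root/.kube/config apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: runasuser-65534-demo        # hypothetical name
    spec:
      securityContext:
        runAsUser: 65534                # the uid the conformance test checks
      restartPolicy: Never
      containers:
      - name: main
        image: busybox
        command: ["sh", "-c", "id -u"]  # should print 65534
    EOF
    $ kubectl --kubeconfig=/root/.kube/config logs runasuser-65534-demo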
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":266,"skipped":4311,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:17:43.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Mar 30 22:17:43.698: INFO: Waiting up to 5m0s for pod "pod-6637d498-5ca6-45e3-a2e9-161884af3c27" in namespace "emptydir-3365" to be "success or failure" Mar 30 22:17:43.702: INFO: Pod "pod-6637d498-5ca6-45e3-a2e9-161884af3c27": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065529ms Mar 30 22:17:45.706: INFO: Pod "pod-6637d498-5ca6-45e3-a2e9-161884af3c27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008088211s Mar 30 22:17:47.710: INFO: Pod "pod-6637d498-5ca6-45e3-a2e9-161884af3c27": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01152502s STEP: Saw pod success Mar 30 22:17:47.710: INFO: Pod "pod-6637d498-5ca6-45e3-a2e9-161884af3c27" satisfied condition "success or failure" Mar 30 22:17:47.712: INFO: Trying to get logs from node jerma-worker pod pod-6637d498-5ca6-45e3-a2e9-161884af3c27 container test-container: STEP: delete the pod Mar 30 22:17:47.736: INFO: Waiting for pod pod-6637d498-5ca6-45e3-a2e9-161884af3c27 to disappear Mar 30 22:17:47.747: INFO: Pod pod-6637d498-5ca6-45e3-a2e9-161884af3c27 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:17:47.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3365" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":267,"skipped":4322,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:17:47.755: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-8808 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-8808 STEP: Deleting pre-stop pod Mar 30 22:18:00.906: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:18:00.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-8808" for this suite. 
• [SLOW TEST:13.189 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":268,"skipped":4324,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:18:00.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 30 22:18:01.019: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3e1cab4b-903b-4d36-b955-173d3f0a2dab" in namespace "projected-5019" to be "success or failure" Mar 30 22:18:01.026: INFO: Pod "downwardapi-volume-3e1cab4b-903b-4d36-b955-173d3f0a2dab": Phase="Pending", Reason="", readiness=false. Elapsed: 6.415916ms Mar 30 22:18:03.106: INFO: Pod "downwardapi-volume-3e1cab4b-903b-4d36-b955-173d3f0a2dab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086424177s Mar 30 22:18:05.118: INFO: Pod "downwardapi-volume-3e1cab4b-903b-4d36-b955-173d3f0a2dab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.098476742s STEP: Saw pod success Mar 30 22:18:05.118: INFO: Pod "downwardapi-volume-3e1cab4b-903b-4d36-b955-173d3f0a2dab" satisfied condition "success or failure" Mar 30 22:18:05.121: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-3e1cab4b-903b-4d36-b955-173d3f0a2dab container client-container: STEP: delete the pod Mar 30 22:18:05.157: INFO: Waiting for pod downwardapi-volume-3e1cab4b-903b-4d36-b955-173d3f0a2dab to disappear Mar 30 22:18:05.175: INFO: Pod downwardapi-volume-3e1cab4b-903b-4d36-b955-173d3f0a2dab no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:18:05.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5019" for this suite. 
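The projected downwardAPI volume used above exposes the container's cpu request as a file. A minimal sketch of that wiring, assuming a hypothetical pod name and a 250m request; the container name matches the `client-container` shown in the log:

    $ kubectl --kubeconfig=/root/.kube/config apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-cpu-demo          # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
        resources:
          requests:
            cpu: 250m
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: cpu_request
                resourceFieldRef:
                  containerName: client-container
                  resource: requests.cpu
                  divisor: 1m           # report millicores; file contains "250"
    EOF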
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":269,"skipped":4348,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:18:05.183: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Mar 30 22:18:05.246: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bc195340-06a7-41c5-b742-d447e9f87d3d" in namespace "projected-8536" to be "success or failure" Mar 30 22:18:05.266: INFO: Pod "downwardapi-volume-bc195340-06a7-41c5-b742-d447e9f87d3d": Phase="Pending", Reason="", readiness=false. Elapsed: 19.749666ms Mar 30 22:18:07.270: INFO: Pod "downwardapi-volume-bc195340-06a7-41c5-b742-d447e9f87d3d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024062881s Mar 30 22:18:09.274: INFO: Pod "downwardapi-volume-bc195340-06a7-41c5-b742-d447e9f87d3d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028043009s STEP: Saw pod success Mar 30 22:18:09.274: INFO: Pod "downwardapi-volume-bc195340-06a7-41c5-b742-d447e9f87d3d" satisfied condition "success or failure" Mar 30 22:18:09.277: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-bc195340-06a7-41c5-b742-d447e9f87d3d container client-container: STEP: delete the pod Mar 30 22:18:09.294: INFO: Waiting for pod downwardapi-volume-bc195340-06a7-41c5-b742-d447e9f87d3d to disappear Mar 30 22:18:09.298: INFO: Pod downwardapi-volume-bc195340-06a7-41c5-b742-d447e9f87d3d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:18:09.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8536" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":270,"skipped":4433,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:18:09.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-4k52n in namespace proxy-7028 I0330 22:18:09.441697 6 runners.go:189] Created replication controller with name: proxy-service-4k52n, namespace: proxy-7028, replica count: 1 I0330 22:18:10.492114 6 runners.go:189] proxy-service-4k52n Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0330 22:18:11.492348 6 runners.go:189] proxy-service-4k52n Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0330 22:18:12.492587 6 runners.go:189] proxy-service-4k52n Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0330 22:18:13.492795 6 runners.go:189] proxy-service-4k52n Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Mar 30 22:18:13.496: INFO: setup took 4.127500231s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Mar 30 22:18:13.504: INFO: (0) /api/v1/namespaces/proxy-7028/pods/http:proxy-service-4k52n-l8ngn:160/proxy/: foo (200; 7.210851ms) Mar 30 22:18:13.506: INFO: (0) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn:1080/proxy/: test<... (200; 9.279077ms) Mar 30 22:18:13.506: INFO: (0) /api/v1/namespaces/proxy-7028/services/proxy-service-4k52n:portname2/proxy/: bar (200; 9.089906ms) Mar 30 22:18:13.506: INFO: (0) /api/v1/namespaces/proxy-7028/pods/http:proxy-service-4k52n-l8ngn:1080/proxy/: ... 
(200; 9.306586ms) Mar 30 22:18:13.506: INFO: (0) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn:162/proxy/: bar (200; 9.538686ms) Mar 30 22:18:13.507: INFO: (0) /api/v1/namespaces/proxy-7028/pods/http:proxy-service-4k52n-l8ngn:162/proxy/: bar (200; 10.280335ms) Mar 30 22:18:13.507: INFO: (0) /api/v1/namespaces/proxy-7028/services/http:proxy-service-4k52n:portname1/proxy/: foo (200; 10.538012ms) Mar 30 22:18:13.508: INFO: (0) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn/proxy/: test (200; 11.279529ms) Mar 30 22:18:13.508: INFO: (0) /api/v1/namespaces/proxy-7028/services/proxy-service-4k52n:portname1/proxy/: foo (200; 11.350246ms) Mar 30 22:18:13.508: INFO: (0) /api/v1/namespaces/proxy-7028/services/http:proxy-service-4k52n:portname2/proxy/: bar (200; 11.873584ms) Mar 30 22:18:13.509: INFO: (0) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn:160/proxy/: foo (200; 12.172451ms) Mar 30 22:18:13.514: INFO: (0) /api/v1/namespaces/proxy-7028/services/https:proxy-service-4k52n:tlsportname2/proxy/: tls qux (200; 17.845613ms) Mar 30 22:18:13.514: INFO: (0) /api/v1/namespaces/proxy-7028/pods/https:proxy-service-4k52n-l8ngn:460/proxy/: tls baz (200; 17.491815ms) Mar 30 22:18:13.514: INFO: (0) /api/v1/namespaces/proxy-7028/pods/https:proxy-service-4k52n-l8ngn:462/proxy/: tls qux (200; 17.796944ms) Mar 30 22:18:13.514: INFO: (0) /api/v1/namespaces/proxy-7028/services/https:proxy-service-4k52n:tlsportname1/proxy/: tls baz (200; 17.821587ms) Mar 30 22:18:13.514: INFO: (0) /api/v1/namespaces/proxy-7028/pods/https:proxy-service-4k52n-l8ngn:443/proxy/: test (200; 3.712901ms) Mar 30 22:18:13.518: INFO: (1) /api/v1/namespaces/proxy-7028/pods/http:proxy-service-4k52n-l8ngn:1080/proxy/: ... (200; 3.727025ms) Mar 30 22:18:13.519: INFO: (1) /api/v1/namespaces/proxy-7028/pods/https:proxy-service-4k52n-l8ngn:462/proxy/: tls qux (200; 4.060323ms) Mar 30 22:18:13.519: INFO: (1) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn:162/proxy/: bar (200; 4.119419ms) Mar 30 22:18:13.519: INFO: (1) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn:160/proxy/: foo (200; 4.237452ms) Mar 30 22:18:13.519: INFO: (1) /api/v1/namespaces/proxy-7028/pods/https:proxy-service-4k52n-l8ngn:443/proxy/: test<... 
(200; 5.393164ms) Mar 30 22:18:13.520: INFO: (1) /api/v1/namespaces/proxy-7028/services/proxy-service-4k52n:portname2/proxy/: bar (200; 5.430836ms) Mar 30 22:18:13.520: INFO: (1) /api/v1/namespaces/proxy-7028/pods/https:proxy-service-4k52n-l8ngn:460/proxy/: tls baz (200; 5.407041ms) Mar 30 22:18:13.520: INFO: (1) /api/v1/namespaces/proxy-7028/services/http:proxy-service-4k52n:portname2/proxy/: bar (200; 5.450492ms) Mar 30 22:18:13.520: INFO: (1) /api/v1/namespaces/proxy-7028/services/proxy-service-4k52n:portname1/proxy/: foo (200; 5.721038ms) Mar 30 22:18:13.520: INFO: (1) /api/v1/namespaces/proxy-7028/services/https:proxy-service-4k52n:tlsportname2/proxy/: tls qux (200; 5.658714ms) Mar 30 22:18:13.520: INFO: (1) /api/v1/namespaces/proxy-7028/services/http:proxy-service-4k52n:portname1/proxy/: foo (200; 5.634723ms) Mar 30 22:18:13.520: INFO: (1) /api/v1/namespaces/proxy-7028/services/https:proxy-service-4k52n:tlsportname1/proxy/: tls baz (200; 5.724892ms) Mar 30 22:18:13.523: INFO: (2) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn:162/proxy/: bar (200; 2.30169ms) Mar 30 22:18:13.525: INFO: (2) /api/v1/namespaces/proxy-7028/pods/https:proxy-service-4k52n-l8ngn:460/proxy/: tls baz (200; 4.058782ms) Mar 30 22:18:13.525: INFO: (2) /api/v1/namespaces/proxy-7028/pods/http:proxy-service-4k52n-l8ngn:160/proxy/: foo (200; 4.391416ms) Mar 30 22:18:13.525: INFO: (2) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn:1080/proxy/: test<... (200; 4.502377ms) Mar 30 22:18:13.525: INFO: (2) /api/v1/namespaces/proxy-7028/pods/http:proxy-service-4k52n-l8ngn:162/proxy/: bar (200; 4.7208ms) Mar 30 22:18:13.525: INFO: (2) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn/proxy/: test (200; 4.88354ms) Mar 30 22:18:13.526: INFO: (2) /api/v1/namespaces/proxy-7028/pods/https:proxy-service-4k52n-l8ngn:443/proxy/: ... 
(200; 5.694463ms) Mar 30 22:18:13.527: INFO: (2) /api/v1/namespaces/proxy-7028/services/proxy-service-4k52n:portname1/proxy/: foo (200; 6.334992ms) Mar 30 22:18:13.527: INFO: (2) /api/v1/namespaces/proxy-7028/services/https:proxy-service-4k52n:tlsportname1/proxy/: tls baz (200; 6.703384ms) Mar 30 22:18:13.527: INFO: (2) /api/v1/namespaces/proxy-7028/services/http:proxy-service-4k52n:portname1/proxy/: foo (200; 6.806824ms) Mar 30 22:18:13.527: INFO: (2) /api/v1/namespaces/proxy-7028/services/https:proxy-service-4k52n:tlsportname2/proxy/: tls qux (200; 6.766813ms) Mar 30 22:18:13.527: INFO: (2) /api/v1/namespaces/proxy-7028/services/proxy-service-4k52n:portname2/proxy/: bar (200; 6.733124ms) Mar 30 22:18:13.527: INFO: (2) /api/v1/namespaces/proxy-7028/services/http:proxy-service-4k52n:portname2/proxy/: bar (200; 6.724465ms) Mar 30 22:18:13.531: INFO: (3) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn:160/proxy/: foo (200; 4.126088ms) Mar 30 22:18:13.532: INFO: (3) /api/v1/namespaces/proxy-7028/services/http:proxy-service-4k52n:portname1/proxy/: foo (200; 4.248862ms) Mar 30 22:18:13.532: INFO: (3) /api/v1/namespaces/proxy-7028/services/proxy-service-4k52n:portname2/proxy/: bar (200; 4.349873ms) Mar 30 22:18:13.532: INFO: (3) /api/v1/namespaces/proxy-7028/services/https:proxy-service-4k52n:tlsportname2/proxy/: tls qux (200; 4.567925ms) Mar 30 22:18:13.532: INFO: (3) /api/v1/namespaces/proxy-7028/services/proxy-service-4k52n:portname1/proxy/: foo (200; 4.543149ms) Mar 30 22:18:13.532: INFO: (3) /api/v1/namespaces/proxy-7028/services/https:proxy-service-4k52n:tlsportname1/proxy/: tls baz (200; 4.567466ms) Mar 30 22:18:13.532: INFO: (3) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn/proxy/: test (200; 5.042036ms) Mar 30 22:18:13.532: INFO: (3) /api/v1/namespaces/proxy-7028/pods/http:proxy-service-4k52n-l8ngn:162/proxy/: bar (200; 5.076392ms) Mar 30 22:18:13.532: INFO: (3) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn:162/proxy/: bar (200; 5.044409ms) Mar 30 22:18:13.533: INFO: (3) /api/v1/namespaces/proxy-7028/pods/https:proxy-service-4k52n-l8ngn:460/proxy/: tls baz (200; 5.27044ms) Mar 30 22:18:13.533: INFO: (3) /api/v1/namespaces/proxy-7028/services/http:proxy-service-4k52n:portname2/proxy/: bar (200; 5.872555ms) Mar 30 22:18:13.533: INFO: (3) /api/v1/namespaces/proxy-7028/pods/https:proxy-service-4k52n-l8ngn:443/proxy/: ... (200; 6.061466ms) Mar 30 22:18:13.533: INFO: (3) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn:1080/proxy/: test<... (200; 6.190932ms) Mar 30 22:18:13.533: INFO: (3) /api/v1/namespaces/proxy-7028/pods/https:proxy-service-4k52n-l8ngn:462/proxy/: tls qux (200; 6.122358ms) Mar 30 22:18:13.538: INFO: (4) /api/v1/namespaces/proxy-7028/services/http:proxy-service-4k52n:portname2/proxy/: bar (200; 5.022133ms) Mar 30 22:18:13.539: INFO: (4) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn:162/proxy/: bar (200; 5.013543ms) Mar 30 22:18:13.539: INFO: (4) /api/v1/namespaces/proxy-7028/pods/http:proxy-service-4k52n-l8ngn:1080/proxy/: ... (200; 5.108077ms) Mar 30 22:18:13.539: INFO: (4) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn:1080/proxy/: test<... 
(200; 5.154989ms) Mar 30 22:18:13.539: INFO: (4) /api/v1/namespaces/proxy-7028/pods/http:proxy-service-4k52n-l8ngn:160/proxy/: foo (200; 5.419855ms) Mar 30 22:18:13.539: INFO: (4) /api/v1/namespaces/proxy-7028/services/proxy-service-4k52n:portname2/proxy/: bar (200; 5.655653ms) Mar 30 22:18:13.539: INFO: (4) /api/v1/namespaces/proxy-7028/pods/https:proxy-service-4k52n-l8ngn:460/proxy/: tls baz (200; 5.79533ms) Mar 30 22:18:13.539: INFO: (4) /api/v1/namespaces/proxy-7028/pods/https:proxy-service-4k52n-l8ngn:462/proxy/: tls qux (200; 5.804699ms) Mar 30 22:18:13.539: INFO: (4) /api/v1/namespaces/proxy-7028/services/https:proxy-service-4k52n:tlsportname1/proxy/: tls baz (200; 5.843398ms) Mar 30 22:18:13.539: INFO: (4) /api/v1/namespaces/proxy-7028/services/proxy-service-4k52n:portname1/proxy/: foo (200; 5.910384ms) Mar 30 22:18:13.539: INFO: (4) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn/proxy/: test (200; 5.99621ms) Mar 30 22:18:13.540: INFO: (4) /api/v1/namespaces/proxy-7028/services/https:proxy-service-4k52n:tlsportname2/proxy/: tls qux (200; 5.961122ms) Mar 30 22:18:13.540: INFO: (4) /api/v1/namespaces/proxy-7028/pods/http:proxy-service-4k52n-l8ngn:162/proxy/: bar (200; 5.986911ms) Mar 30 22:18:13.540: INFO: (4) /api/v1/namespaces/proxy-7028/services/http:proxy-service-4k52n:portname1/proxy/: foo (200; 6.037781ms) Mar 30 22:18:13.540: INFO: (4) /api/v1/namespaces/proxy-7028/pods/https:proxy-service-4k52n-l8ngn:443/proxy/: ... (200; 4.020571ms) Mar 30 22:18:13.544: INFO: (5) /api/v1/namespaces/proxy-7028/services/proxy-service-4k52n:portname2/proxy/: bar (200; 4.097705ms) Mar 30 22:18:13.544: INFO: (5) /api/v1/namespaces/proxy-7028/services/https:proxy-service-4k52n:tlsportname2/proxy/: tls qux (200; 4.019092ms) Mar 30 22:18:13.544: INFO: (5) /api/v1/namespaces/proxy-7028/services/http:proxy-service-4k52n:portname2/proxy/: bar (200; 4.140513ms) Mar 30 22:18:13.544: INFO: (5) /api/v1/namespaces/proxy-7028/services/http:proxy-service-4k52n:portname1/proxy/: foo (200; 4.288385ms) Mar 30 22:18:13.544: INFO: (5) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn/proxy/: test (200; 4.39893ms) Mar 30 22:18:13.544: INFO: (5) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn:162/proxy/: bar (200; 4.536674ms) Mar 30 22:18:13.544: INFO: (5) /api/v1/namespaces/proxy-7028/services/proxy-service-4k52n:portname1/proxy/: foo (200; 4.665836ms) Mar 30 22:18:13.545: INFO: (5) /api/v1/namespaces/proxy-7028/pods/http:proxy-service-4k52n-l8ngn:160/proxy/: foo (200; 4.710321ms) Mar 30 22:18:13.545: INFO: (5) /api/v1/namespaces/proxy-7028/services/https:proxy-service-4k52n:tlsportname1/proxy/: tls baz (200; 4.709079ms) Mar 30 22:18:13.545: INFO: (5) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn:160/proxy/: foo (200; 4.78257ms) Mar 30 22:18:13.545: INFO: (5) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn:1080/proxy/: test<... (200; 4.990637ms) Mar 30 22:18:13.547: INFO: (6) /api/v1/namespaces/proxy-7028/pods/http:proxy-service-4k52n-l8ngn:162/proxy/: bar (200; 2.508663ms) Mar 30 22:18:13.547: INFO: (6) /api/v1/namespaces/proxy-7028/pods/https:proxy-service-4k52n-l8ngn:460/proxy/: tls baz (200; 2.643845ms) Mar 30 22:18:13.548: INFO: (6) /api/v1/namespaces/proxy-7028/pods/https:proxy-service-4k52n-l8ngn:443/proxy/: test<... 
(200; 3.662406ms) Mar 30 22:18:13.549: INFO: (6) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn:160/proxy/: foo (200; 3.655769ms) Mar 30 22:18:13.549: INFO: (6) /api/v1/namespaces/proxy-7028/pods/http:proxy-service-4k52n-l8ngn:1080/proxy/: ... (200; 3.712726ms) Mar 30 22:18:13.549: INFO: (6) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn/proxy/: test (200; 3.657914ms) Mar 30 22:18:13.549: INFO: (6) /api/v1/namespaces/proxy-7028/pods/http:proxy-service-4k52n-l8ngn:160/proxy/: foo (200; 3.678804ms) Mar 30 22:18:13.549: INFO: (6) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn:162/proxy/: bar (200; 3.934198ms) Mar 30 22:18:13.549: INFO: (6) /api/v1/namespaces/proxy-7028/pods/https:proxy-service-4k52n-l8ngn:462/proxy/: tls qux (200; 4.077639ms) Mar 30 22:18:13.554: INFO: (6) /api/v1/namespaces/proxy-7028/services/http:proxy-service-4k52n:portname2/proxy/: bar (200; 8.940398ms) Mar 30 22:18:13.554: INFO: (6) /api/v1/namespaces/proxy-7028/services/proxy-service-4k52n:portname1/proxy/: foo (200; 9.066724ms) Mar 30 22:18:13.554: INFO: (6) /api/v1/namespaces/proxy-7028/services/https:proxy-service-4k52n:tlsportname1/proxy/: tls baz (200; 9.053044ms) Mar 30 22:18:13.554: INFO: (6) /api/v1/namespaces/proxy-7028/services/http:proxy-service-4k52n:portname1/proxy/: foo (200; 9.555731ms) Mar 30 22:18:13.555: INFO: (6) /api/v1/namespaces/proxy-7028/services/proxy-service-4k52n:portname2/proxy/: bar (200; 10.17145ms) Mar 30 22:18:13.555: INFO: (6) /api/v1/namespaces/proxy-7028/services/https:proxy-service-4k52n:tlsportname2/proxy/: tls qux (200; 10.414655ms) Mar 30 22:18:13.559: INFO: (7) /api/v1/namespaces/proxy-7028/pods/https:proxy-service-4k52n-l8ngn:460/proxy/: tls baz (200; 3.935075ms) Mar 30 22:18:13.559: INFO: (7) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn/proxy/: test (200; 4.058235ms) Mar 30 22:18:13.559: INFO: (7) /api/v1/namespaces/proxy-7028/pods/http:proxy-service-4k52n-l8ngn:1080/proxy/: ... (200; 4.02919ms) Mar 30 22:18:13.560: INFO: (7) /api/v1/namespaces/proxy-7028/services/http:proxy-service-4k52n:portname1/proxy/: foo (200; 4.07448ms) Mar 30 22:18:13.560: INFO: (7) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn:162/proxy/: bar (200; 4.241049ms) Mar 30 22:18:13.560: INFO: (7) /api/v1/namespaces/proxy-7028/services/https:proxy-service-4k52n:tlsportname1/proxy/: tls baz (200; 4.467973ms) Mar 30 22:18:13.560: INFO: (7) /api/v1/namespaces/proxy-7028/services/proxy-service-4k52n:portname1/proxy/: foo (200; 4.617579ms) Mar 30 22:18:13.560: INFO: (7) /api/v1/namespaces/proxy-7028/pods/http:proxy-service-4k52n-l8ngn:162/proxy/: bar (200; 4.700824ms) Mar 30 22:18:13.560: INFO: (7) /api/v1/namespaces/proxy-7028/services/http:proxy-service-4k52n:portname2/proxy/: bar (200; 4.633409ms) Mar 30 22:18:13.560: INFO: (7) /api/v1/namespaces/proxy-7028/pods/http:proxy-service-4k52n-l8ngn:160/proxy/: foo (200; 4.640365ms) Mar 30 22:18:13.560: INFO: (7) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn:1080/proxy/: test<... (200; 4.658556ms) Mar 30 22:18:13.560: INFO: (7) /api/v1/namespaces/proxy-7028/pods/https:proxy-service-4k52n-l8ngn:443/proxy/: ... (200; 3.766136ms) Mar 30 22:18:13.565: INFO: (8) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn:1080/proxy/: test<... 
(200; 4.384443ms) Mar 30 22:18:13.565: INFO: (8) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn:162/proxy/: bar (200; 4.646648ms) Mar 30 22:18:13.565: INFO: (8) /api/v1/namespaces/proxy-7028/services/https:proxy-service-4k52n:tlsportname2/proxy/: tls qux (200; 4.735838ms) Mar 30 22:18:13.565: INFO: (8) /api/v1/namespaces/proxy-7028/pods/https:proxy-service-4k52n-l8ngn:443/proxy/: test (200; 4.971041ms) Mar 30 22:18:13.566: INFO: (8) /api/v1/namespaces/proxy-7028/pods/https:proxy-service-4k52n-l8ngn:462/proxy/: tls qux (200; 5.058361ms) Mar 30 22:18:13.566: INFO: (8) /api/v1/namespaces/proxy-7028/pods/https:proxy-service-4k52n-l8ngn:460/proxy/: tls baz (200; 5.125882ms) Mar 30 22:18:13.566: INFO: (8) /api/v1/namespaces/proxy-7028/services/proxy-service-4k52n:portname2/proxy/: bar (200; 5.298157ms) Mar 30 22:18:13.566: INFO: (8) /api/v1/namespaces/proxy-7028/services/http:proxy-service-4k52n:portname2/proxy/: bar (200; 5.563791ms) Mar 30 22:18:13.566: INFO: (8) /api/v1/namespaces/proxy-7028/services/http:proxy-service-4k52n:portname1/proxy/: foo (200; 5.674895ms) Mar 30 22:18:13.567: INFO: (8) /api/v1/namespaces/proxy-7028/services/https:proxy-service-4k52n:tlsportname1/proxy/: tls baz (200; 6.071436ms) Mar 30 22:18:13.569: INFO: (9) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn:162/proxy/: bar (200; 1.851258ms) Mar 30 22:18:13.570: INFO: (9) /api/v1/namespaces/proxy-7028/pods/http:proxy-service-4k52n-l8ngn:1080/proxy/: ... (200; 3.215488ms) Mar 30 22:18:13.570: INFO: (9) /api/v1/namespaces/proxy-7028/services/http:proxy-service-4k52n:portname1/proxy/: foo (200; 3.427007ms) Mar 30 22:18:13.570: INFO: (9) /api/v1/namespaces/proxy-7028/pods/http:proxy-service-4k52n-l8ngn:162/proxy/: bar (200; 3.414303ms) Mar 30 22:18:13.570: INFO: (9) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn:1080/proxy/: test<... (200; 3.455849ms) Mar 30 22:18:13.570: INFO: (9) /api/v1/namespaces/proxy-7028/pods/https:proxy-service-4k52n-l8ngn:462/proxy/: tls qux (200; 3.468919ms) Mar 30 22:18:13.570: INFO: (9) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn/proxy/: test (200; 3.478316ms) Mar 30 22:18:13.570: INFO: (9) /api/v1/namespaces/proxy-7028/pods/https:proxy-service-4k52n-l8ngn:460/proxy/: tls baz (200; 3.609685ms) Mar 30 22:18:13.571: INFO: (9) /api/v1/namespaces/proxy-7028/pods/http:proxy-service-4k52n-l8ngn:160/proxy/: foo (200; 3.753733ms) Mar 30 22:18:13.571: INFO: (9) /api/v1/namespaces/proxy-7028/pods/https:proxy-service-4k52n-l8ngn:443/proxy/: test<... (200; 4.081625ms) Mar 30 22:18:13.575: INFO: (10) /api/v1/namespaces/proxy-7028/services/https:proxy-service-4k52n:tlsportname1/proxy/: tls baz (200; 4.070531ms) Mar 30 22:18:13.576: INFO: (10) /api/v1/namespaces/proxy-7028/services/proxy-service-4k52n:portname1/proxy/: foo (200; 4.135698ms) Mar 30 22:18:13.576: INFO: (10) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn/proxy/: test (200; 4.27381ms) Mar 30 22:18:13.576: INFO: (10) /api/v1/namespaces/proxy-7028/pods/https:proxy-service-4k52n-l8ngn:460/proxy/: tls baz (200; 4.281176ms) Mar 30 22:18:13.576: INFO: (10) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn:162/proxy/: bar (200; 4.419051ms) Mar 30 22:18:13.576: INFO: (10) /api/v1/namespaces/proxy-7028/pods/http:proxy-service-4k52n-l8ngn:162/proxy/: bar (200; 4.448166ms) Mar 30 22:18:13.576: INFO: (10) /api/v1/namespaces/proxy-7028/pods/http:proxy-service-4k52n-l8ngn:1080/proxy/: ... 
(200; 4.471353ms) Mar 30 22:18:13.576: INFO: (10) /api/v1/namespaces/proxy-7028/pods/https:proxy-service-4k52n-l8ngn:443/proxy/: test (200; 2.987778ms) Mar 30 22:18:13.579: INFO: (11) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn:162/proxy/: bar (200; 2.939122ms) Mar 30 22:18:13.579: INFO: (11) /api/v1/namespaces/proxy-7028/pods/https:proxy-service-4k52n-l8ngn:462/proxy/: tls qux (200; 3.015727ms) Mar 30 22:18:13.580: INFO: (11) /api/v1/namespaces/proxy-7028/pods/http:proxy-service-4k52n-l8ngn:1080/proxy/: ... (200; 3.370065ms) Mar 30 22:18:13.580: INFO: (11) /api/v1/namespaces/proxy-7028/pods/http:proxy-service-4k52n-l8ngn:160/proxy/: foo (200; 3.38641ms) Mar 30 22:18:13.580: INFO: (11) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn:1080/proxy/: test<... (200; 3.520374ms) Mar 30 22:18:13.580: INFO: (11) /api/v1/namespaces/proxy-7028/services/http:proxy-service-4k52n:portname1/proxy/: foo (200; 4.253199ms) Mar 30 22:18:13.580: INFO: (11) /api/v1/namespaces/proxy-7028/services/proxy-service-4k52n:portname1/proxy/: foo (200; 4.27359ms) Mar 30 22:18:13.580: INFO: (11) /api/v1/namespaces/proxy-7028/services/http:proxy-service-4k52n:portname2/proxy/: bar (200; 4.276648ms) Mar 30 22:18:13.580: INFO: (11) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn:160/proxy/: foo (200; 4.312338ms) Mar 30 22:18:13.580: INFO: (11) /api/v1/namespaces/proxy-7028/pods/http:proxy-service-4k52n-l8ngn:162/proxy/: bar (200; 4.417735ms) Mar 30 22:18:13.580: INFO: (11) /api/v1/namespaces/proxy-7028/services/proxy-service-4k52n:portname2/proxy/: bar (200; 4.35083ms) Mar 30 22:18:13.581: INFO: (11) /api/v1/namespaces/proxy-7028/pods/https:proxy-service-4k52n-l8ngn:460/proxy/: tls baz (200; 4.67712ms) Mar 30 22:18:13.581: INFO: (11) /api/v1/namespaces/proxy-7028/services/https:proxy-service-4k52n:tlsportname2/proxy/: tls qux (200; 4.929284ms) Mar 30 22:18:13.581: INFO: (11) /api/v1/namespaces/proxy-7028/services/https:proxy-service-4k52n:tlsportname1/proxy/: tls baz (200; 4.971077ms) Mar 30 22:18:13.581: INFO: (11) /api/v1/namespaces/proxy-7028/pods/https:proxy-service-4k52n-l8ngn:443/proxy/: test<... (200; 2.841001ms) Mar 30 22:18:13.584: INFO: (12) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn/proxy/: test (200; 2.853569ms) Mar 30 22:18:13.585: INFO: (12) /api/v1/namespaces/proxy-7028/pods/https:proxy-service-4k52n-l8ngn:462/proxy/: tls qux (200; 3.201753ms) Mar 30 22:18:13.585: INFO: (12) /api/v1/namespaces/proxy-7028/pods/https:proxy-service-4k52n-l8ngn:443/proxy/: ... 
(200; 3.577908ms) Mar 30 22:18:13.585: INFO: (12) /api/v1/namespaces/proxy-7028/pods/http:proxy-service-4k52n-l8ngn:160/proxy/: foo (200; 3.524711ms) Mar 30 22:18:13.585: INFO: (12) /api/v1/namespaces/proxy-7028/pods/https:proxy-service-4k52n-l8ngn:460/proxy/: tls baz (200; 3.610835ms) Mar 30 22:18:13.585: INFO: (12) /api/v1/namespaces/proxy-7028/services/proxy-service-4k52n:portname1/proxy/: foo (200; 4.160558ms) Mar 30 22:18:13.585: INFO: (12) /api/v1/namespaces/proxy-7028/services/https:proxy-service-4k52n:tlsportname1/proxy/: tls baz (200; 4.166664ms) Mar 30 22:18:13.586: INFO: (12) /api/v1/namespaces/proxy-7028/services/http:proxy-service-4k52n:portname1/proxy/: foo (200; 4.420505ms) Mar 30 22:18:13.586: INFO: (12) /api/v1/namespaces/proxy-7028/services/http:proxy-service-4k52n:portname2/proxy/: bar (200; 4.42762ms) Mar 30 22:18:13.586: INFO: (12) /api/v1/namespaces/proxy-7028/services/https:proxy-service-4k52n:tlsportname2/proxy/: tls qux (200; 4.494082ms) Mar 30 22:18:13.586: INFO: (12) /api/v1/namespaces/proxy-7028/services/proxy-service-4k52n:portname2/proxy/: bar (200; 4.540392ms) Mar 30 22:18:13.588: INFO: (13) /api/v1/namespaces/proxy-7028/pods/http:proxy-service-4k52n-l8ngn:1080/proxy/: ... (200; 2.037547ms) Mar 30 22:18:13.588: INFO: (13) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn:160/proxy/: foo (200; 2.477283ms) Mar 30 22:18:13.588: INFO: (13) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn:162/proxy/: bar (200; 2.560371ms) Mar 30 22:18:13.591: INFO: (13) /api/v1/namespaces/proxy-7028/services/proxy-service-4k52n:portname2/proxy/: bar (200; 5.11445ms) Mar 30 22:18:13.591: INFO: (13) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn:1080/proxy/: test<... (200; 5.239043ms) Mar 30 22:18:13.591: INFO: (13) /api/v1/namespaces/proxy-7028/pods/http:proxy-service-4k52n-l8ngn:160/proxy/: foo (200; 5.234233ms) Mar 30 22:18:13.591: INFO: (13) /api/v1/namespaces/proxy-7028/pods/https:proxy-service-4k52n-l8ngn:462/proxy/: tls qux (200; 5.356768ms) Mar 30 22:18:13.591: INFO: (13) /api/v1/namespaces/proxy-7028/pods/https:proxy-service-4k52n-l8ngn:443/proxy/: test (200; 5.567494ms) Mar 30 22:18:13.592: INFO: (13) /api/v1/namespaces/proxy-7028/pods/http:proxy-service-4k52n-l8ngn:162/proxy/: bar (200; 5.693378ms) Mar 30 22:18:13.592: INFO: (13) /api/v1/namespaces/proxy-7028/pods/https:proxy-service-4k52n-l8ngn:460/proxy/: tls baz (200; 5.735448ms) Mar 30 22:18:13.594: INFO: (14) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn:1080/proxy/: test<... 
(200; 2.475365ms) Mar 30 22:18:13.595: INFO: (14) /api/v1/namespaces/proxy-7028/pods/https:proxy-service-4k52n-l8ngn:462/proxy/: tls qux (200; 2.940791ms) Mar 30 22:18:13.595: INFO: (14) /api/v1/namespaces/proxy-7028/pods/https:proxy-service-4k52n-l8ngn:460/proxy/: tls baz (200; 3.074724ms) Mar 30 22:18:13.595: INFO: (14) /api/v1/namespaces/proxy-7028/pods/http:proxy-service-4k52n-l8ngn:160/proxy/: foo (200; 2.996195ms) Mar 30 22:18:13.596: INFO: (14) /api/v1/namespaces/proxy-7028/services/http:proxy-service-4k52n:portname2/proxy/: bar (200; 4.287687ms) Mar 30 22:18:13.596: INFO: (14) /api/v1/namespaces/proxy-7028/services/http:proxy-service-4k52n:portname1/proxy/: foo (200; 4.422848ms) Mar 30 22:18:13.596: INFO: (14) /api/v1/namespaces/proxy-7028/services/proxy-service-4k52n:portname1/proxy/: foo (200; 4.412096ms) Mar 30 22:18:13.596: INFO: (14) /api/v1/namespaces/proxy-7028/services/https:proxy-service-4k52n:tlsportname1/proxy/: tls baz (200; 4.501763ms) Mar 30 22:18:13.596: INFO: (14) /api/v1/namespaces/proxy-7028/services/https:proxy-service-4k52n:tlsportname2/proxy/: tls qux (200; 4.446124ms) Mar 30 22:18:13.596: INFO: (14) /api/v1/namespaces/proxy-7028/services/proxy-service-4k52n:portname2/proxy/: bar (200; 4.46847ms) Mar 30 22:18:13.596: INFO: (14) /api/v1/namespaces/proxy-7028/pods/http:proxy-service-4k52n-l8ngn:1080/proxy/: ... (200; 4.475235ms) Mar 30 22:18:13.596: INFO: (14) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn:160/proxy/: foo (200; 4.659467ms) Mar 30 22:18:13.596: INFO: (14) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn:162/proxy/: bar (200; 4.664254ms) Mar 30 22:18:13.597: INFO: (14) /api/v1/namespaces/proxy-7028/pods/https:proxy-service-4k52n-l8ngn:443/proxy/: test (200; 4.776328ms) Mar 30 22:18:13.597: INFO: (14) /api/v1/namespaces/proxy-7028/pods/http:proxy-service-4k52n-l8ngn:162/proxy/: bar (200; 4.789951ms) Mar 30 22:18:13.600: INFO: (15) /api/v1/namespaces/proxy-7028/pods/https:proxy-service-4k52n-l8ngn:462/proxy/: tls qux (200; 3.042535ms) Mar 30 22:18:13.600: INFO: (15) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn:1080/proxy/: test<... (200; 3.366657ms) Mar 30 22:18:13.600: INFO: (15) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn:160/proxy/: foo (200; 3.487444ms) Mar 30 22:18:13.600: INFO: (15) /api/v1/namespaces/proxy-7028/pods/http:proxy-service-4k52n-l8ngn:160/proxy/: foo (200; 3.492824ms) Mar 30 22:18:13.600: INFO: (15) /api/v1/namespaces/proxy-7028/pods/http:proxy-service-4k52n-l8ngn:162/proxy/: bar (200; 3.55438ms) Mar 30 22:18:13.600: INFO: (15) /api/v1/namespaces/proxy-7028/pods/https:proxy-service-4k52n-l8ngn:443/proxy/: test (200; 3.595767ms) Mar 30 22:18:13.600: INFO: (15) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn:162/proxy/: bar (200; 3.625946ms) Mar 30 22:18:13.600: INFO: (15) /api/v1/namespaces/proxy-7028/pods/http:proxy-service-4k52n-l8ngn:1080/proxy/: ... 
(200; 3.699928ms) Mar 30 22:18:13.601: INFO: (15) /api/v1/namespaces/proxy-7028/pods/https:proxy-service-4k52n-l8ngn:460/proxy/: tls baz (200; 3.832134ms) Mar 30 22:18:13.602: INFO: (15) /api/v1/namespaces/proxy-7028/services/http:proxy-service-4k52n:portname2/proxy/: bar (200; 4.912851ms) Mar 30 22:18:13.602: INFO: (15) /api/v1/namespaces/proxy-7028/services/https:proxy-service-4k52n:tlsportname1/proxy/: tls baz (200; 5.045278ms) Mar 30 22:18:13.602: INFO: (15) /api/v1/namespaces/proxy-7028/services/proxy-service-4k52n:portname1/proxy/: foo (200; 5.096616ms) Mar 30 22:18:13.602: INFO: (15) /api/v1/namespaces/proxy-7028/services/http:proxy-service-4k52n:portname1/proxy/: foo (200; 5.107193ms) Mar 30 22:18:13.602: INFO: (15) /api/v1/namespaces/proxy-7028/services/proxy-service-4k52n:portname2/proxy/: bar (200; 5.028062ms) Mar 30 22:18:13.602: INFO: (15) /api/v1/namespaces/proxy-7028/services/https:proxy-service-4k52n:tlsportname2/proxy/: tls qux (200; 5.085255ms) Mar 30 22:18:13.606: INFO: (16) /api/v1/namespaces/proxy-7028/services/proxy-service-4k52n:portname2/proxy/: bar (200; 3.943601ms) Mar 30 22:18:13.606: INFO: (16) /api/v1/namespaces/proxy-7028/services/https:proxy-service-4k52n:tlsportname2/proxy/: tls qux (200; 4.198687ms) Mar 30 22:18:13.606: INFO: (16) /api/v1/namespaces/proxy-7028/services/http:proxy-service-4k52n:portname2/proxy/: bar (200; 4.29745ms) Mar 30 22:18:13.606: INFO: (16) /api/v1/namespaces/proxy-7028/services/https:proxy-service-4k52n:tlsportname1/proxy/: tls baz (200; 4.480763ms) Mar 30 22:18:13.607: INFO: (16) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn:162/proxy/: bar (200; 4.825609ms) Mar 30 22:18:13.607: INFO: (16) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn/proxy/: test (200; 4.868769ms) Mar 30 22:18:13.607: INFO: (16) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn:1080/proxy/: test<... (200; 4.855748ms) Mar 30 22:18:13.607: INFO: (16) /api/v1/namespaces/proxy-7028/services/proxy-service-4k52n:portname1/proxy/: foo (200; 4.856329ms) Mar 30 22:18:13.607: INFO: (16) /api/v1/namespaces/proxy-7028/pods/http:proxy-service-4k52n-l8ngn:162/proxy/: bar (200; 4.961578ms) Mar 30 22:18:13.607: INFO: (16) /api/v1/namespaces/proxy-7028/pods/https:proxy-service-4k52n-l8ngn:460/proxy/: tls baz (200; 4.96996ms) Mar 30 22:18:13.607: INFO: (16) /api/v1/namespaces/proxy-7028/pods/http:proxy-service-4k52n-l8ngn:1080/proxy/: ... (200; 5.027404ms) Mar 30 22:18:13.607: INFO: (16) /api/v1/namespaces/proxy-7028/services/http:proxy-service-4k52n:portname1/proxy/: foo (200; 5.00423ms) Mar 30 22:18:13.607: INFO: (16) /api/v1/namespaces/proxy-7028/pods/https:proxy-service-4k52n-l8ngn:443/proxy/: test<... (200; 5.013819ms) Mar 30 22:18:13.612: INFO: (17) /api/v1/namespaces/proxy-7028/pods/http:proxy-service-4k52n-l8ngn:1080/proxy/: ... 
(200; 5.045082ms) Mar 30 22:18:13.612: INFO: (17) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn:160/proxy/: foo (200; 5.1096ms) Mar 30 22:18:13.612: INFO: (17) /api/v1/namespaces/proxy-7028/services/proxy-service-4k52n:portname1/proxy/: foo (200; 5.091482ms) Mar 30 22:18:13.612: INFO: (17) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn:162/proxy/: bar (200; 5.237051ms) Mar 30 22:18:13.612: INFO: (17) /api/v1/namespaces/proxy-7028/pods/https:proxy-service-4k52n-l8ngn:443/proxy/: test (200; 5.314513ms) Mar 30 22:18:13.613: INFO: (17) /api/v1/namespaces/proxy-7028/services/https:proxy-service-4k52n:tlsportname2/proxy/: tls qux (200; 5.338365ms) Mar 30 22:18:13.613: INFO: (17) /api/v1/namespaces/proxy-7028/pods/https:proxy-service-4k52n-l8ngn:462/proxy/: tls qux (200; 5.547589ms) Mar 30 22:18:13.613: INFO: (17) /api/v1/namespaces/proxy-7028/pods/https:proxy-service-4k52n-l8ngn:460/proxy/: tls baz (200; 5.508184ms) Mar 30 22:18:13.613: INFO: (17) /api/v1/namespaces/proxy-7028/services/https:proxy-service-4k52n:tlsportname1/proxy/: tls baz (200; 5.602587ms) Mar 30 22:18:13.613: INFO: (17) /api/v1/namespaces/proxy-7028/services/http:proxy-service-4k52n:portname2/proxy/: bar (200; 5.685796ms) Mar 30 22:18:13.613: INFO: (17) /api/v1/namespaces/proxy-7028/services/http:proxy-service-4k52n:portname1/proxy/: foo (200; 5.722659ms) Mar 30 22:18:13.613: INFO: (17) /api/v1/namespaces/proxy-7028/services/proxy-service-4k52n:portname2/proxy/: bar (200; 5.721264ms) Mar 30 22:18:13.618: INFO: (18) /api/v1/namespaces/proxy-7028/pods/http:proxy-service-4k52n-l8ngn:160/proxy/: foo (200; 5.402272ms) Mar 30 22:18:13.621: INFO: (18) /api/v1/namespaces/proxy-7028/pods/http:proxy-service-4k52n-l8ngn:1080/proxy/: ... (200; 7.747937ms) Mar 30 22:18:13.621: INFO: (18) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn:1080/proxy/: test<... (200; 7.819328ms) Mar 30 22:18:13.622: INFO: (18) /api/v1/namespaces/proxy-7028/pods/http:proxy-service-4k52n-l8ngn:162/proxy/: bar (200; 8.776949ms) Mar 30 22:18:13.622: INFO: (18) /api/v1/namespaces/proxy-7028/pods/https:proxy-service-4k52n-l8ngn:462/proxy/: tls qux (200; 8.942386ms) Mar 30 22:18:13.622: INFO: (18) /api/v1/namespaces/proxy-7028/pods/https:proxy-service-4k52n-l8ngn:443/proxy/: test (200; 9.310518ms) Mar 30 22:18:13.623: INFO: (18) /api/v1/namespaces/proxy-7028/services/proxy-service-4k52n:portname2/proxy/: bar (200; 10.070464ms) Mar 30 22:18:13.623: INFO: (18) /api/v1/namespaces/proxy-7028/services/proxy-service-4k52n:portname1/proxy/: foo (200; 10.005482ms) Mar 30 22:18:13.623: INFO: (18) /api/v1/namespaces/proxy-7028/services/https:proxy-service-4k52n:tlsportname1/proxy/: tls baz (200; 10.111938ms) Mar 30 22:18:13.623: INFO: (18) /api/v1/namespaces/proxy-7028/services/http:proxy-service-4k52n:portname1/proxy/: foo (200; 10.205621ms) Mar 30 22:18:13.623: INFO: (18) /api/v1/namespaces/proxy-7028/services/https:proxy-service-4k52n:tlsportname2/proxy/: tls qux (200; 10.176174ms) Mar 30 22:18:13.626: INFO: (19) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn:160/proxy/: foo (200; 2.742925ms) Mar 30 22:18:13.626: INFO: (19) /api/v1/namespaces/proxy-7028/pods/https:proxy-service-4k52n-l8ngn:443/proxy/: ... (200; 3.455435ms) Mar 30 22:18:13.627: INFO: (19) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn:1080/proxy/: test<... 
(200; 3.891496ms) Mar 30 22:18:13.627: INFO: (19) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn:162/proxy/: bar (200; 3.932384ms) Mar 30 22:18:13.627: INFO: (19) /api/v1/namespaces/proxy-7028/pods/http:proxy-service-4k52n-l8ngn:160/proxy/: foo (200; 3.963381ms) Mar 30 22:18:13.628: INFO: (19) /api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn/proxy/: test (200; 4.110274ms) Mar 30 22:18:13.628: INFO: (19) /api/v1/namespaces/proxy-7028/pods/https:proxy-service-4k52n-l8ngn:462/proxy/: tls qux (200; 4.550438ms) Mar 30 22:18:13.628: INFO: (19) /api/v1/namespaces/proxy-7028/services/https:proxy-service-4k52n:tlsportname1/proxy/: tls baz (200; 5.083427ms) Mar 30 22:18:13.628: INFO: (19) /api/v1/namespaces/proxy-7028/services/http:proxy-service-4k52n:portname2/proxy/: bar (200; 5.029926ms) Mar 30 22:18:13.629: INFO: (19) /api/v1/namespaces/proxy-7028/services/http:proxy-service-4k52n:portname1/proxy/: foo (200; 5.123905ms) Mar 30 22:18:13.629: INFO: (19) /api/v1/namespaces/proxy-7028/services/proxy-service-4k52n:portname2/proxy/: bar (200; 5.123156ms) Mar 30 22:18:13.629: INFO: (19) /api/v1/namespaces/proxy-7028/services/proxy-service-4k52n:portname1/proxy/: foo (200; 5.307819ms) Mar 30 22:18:13.629: INFO: (19) /api/v1/namespaces/proxy-7028/services/https:proxy-service-4k52n:tlsportname2/proxy/: tls qux (200; 5.26104ms) STEP: deleting ReplicationController proxy-service-4k52n in namespace proxy-7028, will wait for the garbage collector to delete the pods Mar 30 22:18:13.687: INFO: Deleting ReplicationController proxy-service-4k52n took: 6.52431ms Mar 30 22:18:13.988: INFO: Terminating ReplicationController proxy-service-4k52n pods took: 300.235917ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:18:19.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-7028" for this suite. 
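Each request above goes through the apiserver's proxy subresource: the path names a pod or service, optionally with a scheme prefix (http:, https:) and a port name or number, and the apiserver forwards the request to that backend. As a rough illustration, the same endpoints can be exercised by hand while the fixtures exist (hypothetical session; the namespace above is destroyed at the end of the spec, and port 8001 is an arbitrary choice):

# Open a local proxy to the apiserver.
kubectl proxy --port=8001 &

# Pod proxy over plain HTTP on container port 160; the spec expects the body "foo".
curl http://localhost:8001/api/v1/namespaces/proxy-7028/pods/proxy-service-4k52n-l8ngn:160/proxy/

# Service proxy through the named port portname2; the spec expects "bar".
curl http://localhost:8001/api/v1/namespaces/proxy-7028/services/proxy-service-4k52n:portname2/proxy/

# TLS backend selected with the https: scheme prefix; the spec expects "tls baz".
curl http://localhost:8001/api/v1/namespaces/proxy-7028/pods/https:proxy-service-4k52n-l8ngn:460/proxy/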
• [SLOW TEST:10.277 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":278,"completed":271,"skipped":4447,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:18:19.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0330 22:18:30.777464 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Mar 30 22:18:30.777: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:18:30.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3997" for this suite. 
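The spec above gives half of the pods created by simpletest-rc-to-be-deleted a second owner reference pointing at simpletest-rc-to-stay, deletes the first controller with foreground propagation, and verifies the dually-owned pods survive. A rough kubectl equivalent, as a sketch only (the --cascade=foreground spelling assumes a recent kubectl; the test itself sets PropagationPolicy through the API):

# Show each pod with its owners; dually-owned pods list both controllers.
kubectl get pods -n gc-3997 -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.ownerReferences[*].name}{"\n"}{end}'

# Foreground deletion removes only dependents owned exclusively by this rc;
# pods that still name simpletest-rc-to-stay as an owner are kept.
kubectl delete rc simpletest-rc-to-be-deleted -n gc-3997 --cascade=foreground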
• [SLOW TEST:11.186 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":272,"skipped":4463,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:18:30.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs Mar 30 22:18:30.862: INFO: Waiting up to 5m0s for pod "pod-edd6fe4c-318b-499b-b2c1-9c5b07bcef06" in namespace "emptydir-8484" to be "success or failure" Mar 30 22:18:30.864: INFO: Pod "pod-edd6fe4c-318b-499b-b2c1-9c5b07bcef06": Phase="Pending", Reason="", readiness=false. Elapsed: 2.332303ms Mar 30 22:18:32.867: INFO: Pod "pod-edd6fe4c-318b-499b-b2c1-9c5b07bcef06": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005486492s Mar 30 22:18:34.874: INFO: Pod "pod-edd6fe4c-318b-499b-b2c1-9c5b07bcef06": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012390266s STEP: Saw pod success Mar 30 22:18:34.874: INFO: Pod "pod-edd6fe4c-318b-499b-b2c1-9c5b07bcef06" satisfied condition "success or failure" Mar 30 22:18:34.877: INFO: Trying to get logs from node jerma-worker pod pod-edd6fe4c-318b-499b-b2c1-9c5b07bcef06 container test-container: STEP: delete the pod Mar 30 22:18:34.893: INFO: Waiting for pod pod-edd6fe4c-318b-499b-b2c1-9c5b07bcef06 to disappear Mar 30 22:18:34.920: INFO: Pod pod-edd6fe4c-318b-499b-b2c1-9c5b07bcef06 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:18:34.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8484" for this suite. 
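The pod under test runs as a non-root user, writes a file with mode 0666 onto a tmpfs-backed emptyDir, and checks the resulting permissions. A hand-rolled analogue, assuming busybox in place of the test's mount-test image and hypothetical names throughout:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001            # non-root; emptyDir mounts are world-writable by default
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo data > /mnt/f && chmod 0666 /mnt/f && stat -c '%a' /mnt/f"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory           # tmpfs
EOF
kubectl logs emptydir-0666-demo    # expect: 666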
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":273,"skipped":4493,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:18:34.929: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on node default medium Mar 30 22:18:35.000: INFO: Waiting up to 5m0s for pod "pod-25ce3906-c2d6-4b2a-9b24-b47013cfbc1f" in namespace "emptydir-6243" to be "success or failure" Mar 30 22:18:35.006: INFO: Pod "pod-25ce3906-c2d6-4b2a-9b24-b47013cfbc1f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051237ms Mar 30 22:18:37.094: INFO: Pod "pod-25ce3906-c2d6-4b2a-9b24-b47013cfbc1f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093427127s Mar 30 22:18:39.097: INFO: Pod "pod-25ce3906-c2d6-4b2a-9b24-b47013cfbc1f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.097027106s STEP: Saw pod success Mar 30 22:18:39.097: INFO: Pod "pod-25ce3906-c2d6-4b2a-9b24-b47013cfbc1f" satisfied condition "success or failure" Mar 30 22:18:39.100: INFO: Trying to get logs from node jerma-worker pod pod-25ce3906-c2d6-4b2a-9b24-b47013cfbc1f container test-container: STEP: delete the pod Mar 30 22:18:39.144: INFO: Waiting for pod pod-25ce3906-c2d6-4b2a-9b24-b47013cfbc1f to disappear Mar 30 22:18:39.153: INFO: Pod pod-25ce3906-c2d6-4b2a-9b24-b47013cfbc1f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:18:39.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6243" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":274,"skipped":4501,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:18:39.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-689.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-689.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-689.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-689.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-689.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-689.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 30 22:18:45.294: INFO: DNS probes using dns-689/dns-test-5f0aa29e-c821-4bf9-9895-e3f3131eb105 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:18:45.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-689" for this suite. 
• [SLOW TEST:6.182 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":275,"skipped":4507,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:18:45.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-4342.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-4342.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4342.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-4342.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-4342.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4342.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Mar 30 22:18:51.483: INFO: DNS probes using dns-4342/dns-test-97bd42c8-4f20-4f27-9b88-b40e699bd2b4 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:18:51.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4342" for this suite. 
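Here the pod declares hostname: dns-querier-2 and subdomain: dns-test-service-2, so the headless service publishes an A record for the fully qualified pod name. A manual resolution check while the test namespace exists (hypothetical probe pod; busybox:1.28 is used because later busybox tags ship a broken nslookup):

kubectl run dns-probe --image=busybox:1.28 --restart=Never -- nslookup dns-querier-2.dns-test-service-2.dns-4342.svc.cluster.local
kubectl logs dns-probe             # should print the pod's IP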
• [SLOW TEST:6.270 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":276,"skipped":4555,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Mar 30 22:18:51.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-projected-all-test-volume-4ef4005d-2ff7-4095-8a7c-ca1b4ddfba01 STEP: Creating secret with name secret-projected-all-test-volume-5886253a-d9d1-4a79-83a2-30aee43fc868 STEP: Creating a pod to test Check all projections for projected volume plugin Mar 30 22:18:52.077: INFO: Waiting up to 5m0s for pod "projected-volume-d94058be-3b79-4a4b-8738-cf358334305a" in namespace "projected-4735" to be "success or failure" Mar 30 22:18:52.115: INFO: Pod "projected-volume-d94058be-3b79-4a4b-8738-cf358334305a": Phase="Pending", Reason="", readiness=false. Elapsed: 38.029856ms Mar 30 22:18:54.172: INFO: Pod "projected-volume-d94058be-3b79-4a4b-8738-cf358334305a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095465409s Mar 30 22:18:56.177: INFO: Pod "projected-volume-d94058be-3b79-4a4b-8738-cf358334305a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.100169486s STEP: Saw pod success Mar 30 22:18:56.177: INFO: Pod "projected-volume-d94058be-3b79-4a4b-8738-cf358334305a" satisfied condition "success or failure" Mar 30 22:18:56.180: INFO: Trying to get logs from node jerma-worker2 pod projected-volume-d94058be-3b79-4a4b-8738-cf358334305a container projected-all-volume-test: STEP: delete the pod Mar 30 22:18:56.233: INFO: Waiting for pod projected-volume-d94058be-3b79-4a4b-8738-cf358334305a to disappear Mar 30 22:18:56.243: INFO: Pod projected-volume-d94058be-3b79-4a4b-8738-cf358334305a no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Mar 30 22:18:56.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4735" for this suite. 
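The projected volume under test merges several sources into one mount; the log shows a ConfigMap and a Secret being created for it, and the upstream spec also projects downward-API metadata. A sketch of such a volume, with hypothetical resource names and the downwardAPI source included on that assumption:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: projected-all-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: busybox
    command: ["ls", "-R", "/projected"]
    volumeMounts:
    - name: all-in-one
      mountPath: /projected
  volumes:
  - name: all-in-one
    projected:
      sources:
      - configMap:
          name: demo-configmap       # hypothetical
      - secret:
          name: demo-secret          # hypothetical
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF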
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":277,"skipped":4565,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} Mar 30 22:18:56.251: INFO: Running AfterSuite actions on all nodes Mar 30 22:18:56.251: INFO: Running AfterSuite actions on node 1 Mar 30 22:18:56.251: INFO: Skipping dumping logs from cluster {"msg":"Test Suite completed","total":278,"completed":277,"skipped":4565,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} Summarizing 1 Failure: [Fail] [sig-cli] Kubectl client Kubectl logs [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1517 Ran 278 of 4843 Specs in 4326.836 seconds FAIL! -- 277 Passed | 1 Failed | 0 Pending | 4565 Skipped --- FAIL: TestE2E (4326.91s) FAIL