I0512 16:14:03.488724 7 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0512 16:14:03.489054 7 e2e.go:109] Starting e2e run "9c81ead3-a3ef-410f-8702-87048a93e1d6" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1589300042 - Will randomize all specs
Will run 278 of 4842 specs
May 12 16:14:03.552: INFO: >>> kubeConfig: /root/.kube/config
May 12 16:14:03.557: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 12 16:14:03.581: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 12 16:14:03.614: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 12 16:14:03.614: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 12 16:14:03.614: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 12 16:14:03.626: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 12 16:14:03.626: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 12 16:14:03.626: INFO: e2e test version: v1.17.4
May 12 16:14:03.627: INFO: kube-apiserver version: v1.17.2
May 12 16:14:03.627: INFO: >>> kubeConfig: /root/.kube/config
May 12 16:14:03.634: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 12 16:14:03.634: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
May 12 16:14:04.042: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
May 12 16:14:04.049: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a1e1b685-3bfe-46a7-ad2b-7a4ca3f8bdd9" in namespace "downward-api-5959" to be "success or failure"
May 12 16:14:04.106: INFO: Pod "downwardapi-volume-a1e1b685-3bfe-46a7-ad2b-7a4ca3f8bdd9": Phase="Pending", Reason="", readiness=false. Elapsed: 56.725107ms
May 12 16:14:06.542: INFO: Pod "downwardapi-volume-a1e1b685-3bfe-46a7-ad2b-7a4ca3f8bdd9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.493006267s
May 12 16:14:08.546: INFO: Pod "downwardapi-volume-a1e1b685-3bfe-46a7-ad2b-7a4ca3f8bdd9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.496982441s
May 12 16:14:10.550: INFO: Pod "downwardapi-volume-a1e1b685-3bfe-46a7-ad2b-7a4ca3f8bdd9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.50101354s
STEP: Saw pod success
May 12 16:14:10.550: INFO: Pod "downwardapi-volume-a1e1b685-3bfe-46a7-ad2b-7a4ca3f8bdd9" satisfied condition "success or failure"
May 12 16:14:10.554: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-a1e1b685-3bfe-46a7-ad2b-7a4ca3f8bdd9 container client-container:
STEP: delete the pod
May 12 16:14:10.777: INFO: Waiting for pod downwardapi-volume-a1e1b685-3bfe-46a7-ad2b-7a4ca3f8bdd9 to disappear
May 12 16:14:10.850: INFO: Pod downwardapi-volume-a1e1b685-3bfe-46a7-ad2b-7a4ca3f8bdd9 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 12 16:14:10.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5959" for this suite.
• [SLOW TEST:7.224 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":1,"skipped":22,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 12 16:14:10.859: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 12 16:14:17.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4421" for this suite.
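The /etc/hosts entries verified above come straight from the pod spec rather than from any node-side configuration. A minimal sketch of the kind of pod this spec creates, with illustrative names and addresses (the log does not print the actual manifest):

apiVersion: v1
kind: Pod
metadata:
  name: busybox-host-aliases    # illustrative name, not from this run
spec:
  restartPolicy: Never
  hostAliases:                  # the kubelet appends these to the container's /etc/hosts
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "cat /etc/hosts"]

The suite then inspects the container's /etc/hosts and asserts the aliases were written.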
• [SLOW TEST:6.562 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
when scheduling a busybox Pod with hostAliases
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":2,"skipped":40,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 12 16:14:17.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
May 12 16:14:17.519: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"95825c5f-c673-47d2-8c06-5f455260458d", Controller:(*bool)(0xc00290c3c2), BlockOwnerDeletion:(*bool)(0xc00290c3c3)}}
May 12 16:14:17.536: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"32853772-bdd2-4d47-952f-0c7a601d2705", Controller:(*bool)(0xc0028a292a), BlockOwnerDeletion:(*bool)(0xc0028a292b)}}
May 12 16:14:17.593: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"980e9f57-843d-4a18-a01c-3b5ff6b96921", Controller:(*bool)(0xc00290c56a), BlockOwnerDeletion:(*bool)(0xc00290c56b)}}
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 12 16:14:22.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9788" for this suite.
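The three OwnerReferences printed above are the whole point of this spec: pod1 is owned by pod3, pod2 by pod1, and pod3 by pod2, so the ownership graph is a circle. In manifest form, pod1's metadata would look roughly like this (the UID is copied from the log line above; the boolean values are assumed, since the log prints only *bool pointer addresses):

apiVersion: v1
kind: Pod
metadata:
  name: pod1
  ownerReferences:
  - apiVersion: v1
    kind: Pod
    name: pod3                                  # pod1 -> pod3; pod2 -> pod1 and pod3 -> pod2 close the circle
    uid: 95825c5f-c673-47d2-8c06-5f455260458d   # from the log line above
    controller: true                            # assumed value
    blockOwnerDeletion: true                    # assumed value
# (container spec omitted)

The garbage collector has to recognize such circles and still delete all three pods rather than waiting forever on blockOwnerDeletion, which is what this spec verifies.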
• [SLOW TEST:5.243 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":3,"skipped":50,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 12 16:14:22.665: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
May 12 16:14:23.532: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
May 12 16:14:26.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-828 create -f -'
May 12 16:14:32.889: INFO: stderr: ""
May 12 16:14:32.889: INFO: stdout: "e2e-test-crd-publish-openapi-9769-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
May 12 16:14:32.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-828 delete e2e-test-crd-publish-openapi-9769-crds test-cr'
May 12 16:14:33.459: INFO: stderr: ""
May 12 16:14:33.459: INFO: stdout: "e2e-test-crd-publish-openapi-9769-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
May 12 16:14:33.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-828 apply -f -'
May 12 16:14:34.703: INFO: stderr: ""
May 12 16:14:34.703: INFO: stdout: "e2e-test-crd-publish-openapi-9769-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
May 12 16:14:34.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-828 delete e2e-test-crd-publish-openapi-9769-crds test-cr'
May 12 16:14:34.946: INFO: stderr: ""
May 12 16:14:34.946: INFO: stdout: "e2e-test-crd-publish-openapi-9769-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
May 12 16:14:34.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9769-crds'
May 12 16:14:35.996: INFO: stderr: ""
May 12 16:14:35.996: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9769-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 12 16:14:38.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-828" for this suite.
• [SLOW TEST:15.519 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for CRD without validation schema [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":4,"skipped":84,"failed":0}
SS
------------------------------
[k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 12 16:14:38.185: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
May 12 16:14:47.520: INFO: Successfully updated pod "pod-update-activedeadlineseconds-68f3a289-3214-476c-8ca3-908d1114c008"
May 12 16:14:47.520: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-68f3a289-3214-476c-8ca3-908d1114c008" in namespace "pods-5406" to be "terminated due to deadline exceeded"
May 12 16:14:47.567: INFO: Pod "pod-update-activedeadlineseconds-68f3a289-3214-476c-8ca3-908d1114c008": Phase="Running", Reason="", readiness=true. Elapsed: 46.837047ms
May 12 16:14:49.615: INFO: Pod "pod-update-activedeadlineseconds-68f3a289-3214-476c-8ca3-908d1114c008": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.095020538s
May 12 16:14:49.615: INFO: Pod "pod-update-activedeadlineseconds-68f3a289-3214-476c-8ca3-908d1114c008" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 12 16:14:49.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5406" for this suite.
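spec.activeDeadlineSeconds is one of the few pod-spec fields that may be changed on a running pod, which is what the "updating the pod" step relies on: once the pod has been active longer than the deadline, the kubelet kills it and the pod ends up Phase="Failed" with Reason="DeadlineExceeded", exactly as logged. A rough command-line equivalent, with an illustrative pod name:

kubectl patch pod my-pod --type=merge -p '{"spec":{"activeDeadlineSeconds":5}}'   # set (or shorten) the deadline on a live pod
kubectl get pod my-pod -o jsonpath='{.status.reason}'                             # eventually prints: DeadlineExceeded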
• [SLOW TEST:12.126 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":5,"skipped":86,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 12 16:14:50.311: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
May 12 16:14:59.341: INFO: Waiting up to 5m0s for pod "client-envvars-e69626ad-9194-493e-b173-fca879e347fa" in namespace "pods-973" to be "success or failure"
May 12 16:14:59.480: INFO: Pod "client-envvars-e69626ad-9194-493e-b173-fca879e347fa": Phase="Pending", Reason="", readiness=false. Elapsed: 139.730521ms
May 12 16:15:01.525: INFO: Pod "client-envvars-e69626ad-9194-493e-b173-fca879e347fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.184879038s
May 12 16:15:03.586: INFO: Pod "client-envvars-e69626ad-9194-493e-b173-fca879e347fa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.245756067s
May 12 16:15:05.636: INFO: Pod "client-envvars-e69626ad-9194-493e-b173-fca879e347fa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.295323875s
May 12 16:15:07.676: INFO: Pod "client-envvars-e69626ad-9194-493e-b173-fca879e347fa": Phase="Running", Reason="", readiness=true. Elapsed: 8.335148355s
May 12 16:15:09.680: INFO: Pod "client-envvars-e69626ad-9194-493e-b173-fca879e347fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.339363395s
STEP: Saw pod success
May 12 16:15:09.680: INFO: Pod "client-envvars-e69626ad-9194-493e-b173-fca879e347fa" satisfied condition "success or failure"
May 12 16:15:09.683: INFO: Trying to get logs from node jerma-worker2 pod client-envvars-e69626ad-9194-493e-b173-fca879e347fa container env3cont:
STEP: delete the pod
May 12 16:15:09.827: INFO: Waiting for pod client-envvars-e69626ad-9194-493e-b173-fca879e347fa to disappear
May 12 16:15:09.861: INFO: Pod client-envvars-e69626ad-9194-493e-b173-fca879e347fa no longer exists
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 12 16:15:09.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-973" for this suite.
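The env3cont container can only pass this check because the kubelet injects Docker-link-style environment variables for every service that already exists when a pod starts; that ordering is why roughly nine seconds pass between the namespace setup and the client pod being created (the backing server pod and its service are set up first). A sketch of what the client container sees, with an illustrative service name and ClusterIP:

kubectl exec my-client-pod -- env | grep -i _SERVICE_
# FOOSERVICE_SERVICE_HOST=10.96.45.152   (illustrative values)
# FOOSERVICE_SERVICE_PORT=8765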
• [SLOW TEST:19.557 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":6,"skipped":94,"failed":0} [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:15:09.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Starting the proxy May 12 16:15:10.414: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix634923096/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:15:10.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4569" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":7,"skipped":94,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:15:10.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 12 16:15:10.749: INFO: Waiting up to 5m0s for pod "downwardapi-volume-654bc1e0-a020-4375-b685-5a68275ab31b" in namespace "projected-7071" to be "success or failure" May 12 16:15:10.771: INFO: Pod "downwardapi-volume-654bc1e0-a020-4375-b685-5a68275ab31b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 22.657454ms May 12 16:15:12.822: INFO: Pod "downwardapi-volume-654bc1e0-a020-4375-b685-5a68275ab31b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073602893s May 12 16:15:14.826: INFO: Pod "downwardapi-volume-654bc1e0-a020-4375-b685-5a68275ab31b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077043664s May 12 16:15:16.829: INFO: Pod "downwardapi-volume-654bc1e0-a020-4375-b685-5a68275ab31b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.080618971s STEP: Saw pod success May 12 16:15:16.829: INFO: Pod "downwardapi-volume-654bc1e0-a020-4375-b685-5a68275ab31b" satisfied condition "success or failure" May 12 16:15:17.166: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-654bc1e0-a020-4375-b685-5a68275ab31b container client-container: STEP: delete the pod May 12 16:15:18.160: INFO: Waiting for pod downwardapi-volume-654bc1e0-a020-4375-b685-5a68275ab31b to disappear May 12 16:15:18.410: INFO: Pod downwardapi-volume-654bc1e0-a020-4375-b685-5a68275ab31b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:15:18.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7071" for this suite. • [SLOW TEST:8.016 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":8,"skipped":153,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:15:18.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-4799 STEP: creating a selector STEP: Creating the service pods in kubernetes May 12 16:15:19.746: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 12 16:15:47.067: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.43 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4799 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 16:15:47.067: INFO: >>> kubeConfig: /root/.kube/config I0512 16:15:47.091740 7 log.go:172] (0xc0043ba4d0) (0xc0026805a0) Create stream I0512 16:15:47.091777 7 log.go:172] 
(0xc0043ba4d0) (0xc0026805a0) Stream added, broadcasting: 1 I0512 16:15:47.093712 7 log.go:172] (0xc0043ba4d0) Reply frame received for 1 I0512 16:15:47.093747 7 log.go:172] (0xc0043ba4d0) (0xc002680640) Create stream I0512 16:15:47.093759 7 log.go:172] (0xc0043ba4d0) (0xc002680640) Stream added, broadcasting: 3 I0512 16:15:47.094514 7 log.go:172] (0xc0043ba4d0) Reply frame received for 3 I0512 16:15:47.094543 7 log.go:172] (0xc0043ba4d0) (0xc0026806e0) Create stream I0512 16:15:47.094557 7 log.go:172] (0xc0043ba4d0) (0xc0026806e0) Stream added, broadcasting: 5 I0512 16:15:47.095295 7 log.go:172] (0xc0043ba4d0) Reply frame received for 5 I0512 16:15:48.153918 7 log.go:172] (0xc0043ba4d0) Data frame received for 5 I0512 16:15:48.153952 7 log.go:172] (0xc0026806e0) (5) Data frame handling I0512 16:15:48.153973 7 log.go:172] (0xc0043ba4d0) Data frame received for 3 I0512 16:15:48.154005 7 log.go:172] (0xc002680640) (3) Data frame handling I0512 16:15:48.154028 7 log.go:172] (0xc002680640) (3) Data frame sent I0512 16:15:48.154092 7 log.go:172] (0xc0043ba4d0) Data frame received for 3 I0512 16:15:48.154117 7 log.go:172] (0xc002680640) (3) Data frame handling I0512 16:15:48.155717 7 log.go:172] (0xc0043ba4d0) Data frame received for 1 I0512 16:15:48.155750 7 log.go:172] (0xc0026805a0) (1) Data frame handling I0512 16:15:48.155769 7 log.go:172] (0xc0026805a0) (1) Data frame sent I0512 16:15:48.155893 7 log.go:172] (0xc0043ba4d0) (0xc0026805a0) Stream removed, broadcasting: 1 I0512 16:15:48.155944 7 log.go:172] (0xc0043ba4d0) Go away received I0512 16:15:48.156135 7 log.go:172] (0xc0043ba4d0) (0xc0026805a0) Stream removed, broadcasting: 1 I0512 16:15:48.156152 7 log.go:172] (0xc0043ba4d0) (0xc002680640) Stream removed, broadcasting: 3 I0512 16:15:48.156161 7 log.go:172] (0xc0043ba4d0) (0xc0026806e0) Stream removed, broadcasting: 5 May 12 16:15:48.156: INFO: Found all expected endpoints: [netserver-0] May 12 16:15:48.353: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.186 8081 | grep -v '^\s*$'] Namespace:pod-network-test-4799 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 16:15:48.353: INFO: >>> kubeConfig: /root/.kube/config I0512 16:15:48.435567 7 log.go:172] (0xc003500fd0) (0xc0022881e0) Create stream I0512 16:15:48.435595 7 log.go:172] (0xc003500fd0) (0xc0022881e0) Stream added, broadcasting: 1 I0512 16:15:48.437616 7 log.go:172] (0xc003500fd0) Reply frame received for 1 I0512 16:15:48.437646 7 log.go:172] (0xc003500fd0) (0xc002288320) Create stream I0512 16:15:48.437657 7 log.go:172] (0xc003500fd0) (0xc002288320) Stream added, broadcasting: 3 I0512 16:15:48.438468 7 log.go:172] (0xc003500fd0) Reply frame received for 3 I0512 16:15:48.438494 7 log.go:172] (0xc003500fd0) (0xc0022883c0) Create stream I0512 16:15:48.438505 7 log.go:172] (0xc003500fd0) (0xc0022883c0) Stream added, broadcasting: 5 I0512 16:15:48.439201 7 log.go:172] (0xc003500fd0) Reply frame received for 5 I0512 16:15:49.506325 7 log.go:172] (0xc003500fd0) Data frame received for 5 I0512 16:15:49.506366 7 log.go:172] (0xc0022883c0) (5) Data frame handling I0512 16:15:49.506401 7 log.go:172] (0xc003500fd0) Data frame received for 3 I0512 16:15:49.506423 7 log.go:172] (0xc002288320) (3) Data frame handling I0512 16:15:49.506447 7 log.go:172] (0xc002288320) (3) Data frame sent I0512 16:15:49.506467 7 log.go:172] (0xc003500fd0) Data frame received for 3 I0512 16:15:49.506486 7 log.go:172] (0xc002288320) (3) Data frame 
handling I0512 16:15:49.507721 7 log.go:172] (0xc003500fd0) Data frame received for 1 I0512 16:15:49.507744 7 log.go:172] (0xc0022881e0) (1) Data frame handling I0512 16:15:49.507754 7 log.go:172] (0xc0022881e0) (1) Data frame sent I0512 16:15:49.507763 7 log.go:172] (0xc003500fd0) (0xc0022881e0) Stream removed, broadcasting: 1 I0512 16:15:49.507832 7 log.go:172] (0xc003500fd0) (0xc0022881e0) Stream removed, broadcasting: 1 I0512 16:15:49.507847 7 log.go:172] (0xc003500fd0) (0xc002288320) Stream removed, broadcasting: 3 I0512 16:15:49.507858 7 log.go:172] (0xc003500fd0) (0xc0022883c0) Stream removed, broadcasting: 5 May 12 16:15:49.507: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:15:49.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0512 16:15:49.508110 7 log.go:172] (0xc003500fd0) Go away received STEP: Destroying namespace "pod-network-test-4799" for this suite. • [SLOW TEST:31.057 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":9,"skipped":169,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:15:49.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
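In the polling that follows, jerma-control-plane is skipped every round because the DaemonSet's pods carry no toleration for the node-role.kubernetes.io/master:NoSchedule taint, so "every node" effectively means the two workers. A DaemonSet that should also cover tainted control-plane nodes would add something like this to its pod template (a sketch, not part of the spec under test):

      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule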
May 12 16:15:50.303: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 16:15:50.358: INFO: Number of nodes with available pods: 0 May 12 16:15:50.358: INFO: Node jerma-worker is running more than one daemon pod May 12 16:15:51.543: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 16:15:51.712: INFO: Number of nodes with available pods: 0 May 12 16:15:51.712: INFO: Node jerma-worker is running more than one daemon pod May 12 16:15:52.479: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 16:15:52.500: INFO: Number of nodes with available pods: 0 May 12 16:15:52.500: INFO: Node jerma-worker is running more than one daemon pod May 12 16:15:53.527: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 16:15:53.820: INFO: Number of nodes with available pods: 0 May 12 16:15:53.820: INFO: Node jerma-worker is running more than one daemon pod May 12 16:15:54.420: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 16:15:54.778: INFO: Number of nodes with available pods: 0 May 12 16:15:54.778: INFO: Node jerma-worker is running more than one daemon pod May 12 16:15:55.899: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 16:15:56.185: INFO: Number of nodes with available pods: 0 May 12 16:15:56.185: INFO: Node jerma-worker is running more than one daemon pod May 12 16:15:56.902: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 16:15:57.264: INFO: Number of nodes with available pods: 1 May 12 16:15:57.264: INFO: Node jerma-worker2 is running more than one daemon pod May 12 16:15:57.630: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 16:15:58.017: INFO: Number of nodes with available pods: 1 May 12 16:15:58.017: INFO: Node jerma-worker2 is running more than one daemon pod May 12 16:15:58.742: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 16:15:58.782: INFO: Number of nodes with available pods: 2 May 12 16:15:58.782: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
May 12 16:15:59.258: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 16:15:59.522: INFO: Number of nodes with available pods: 1 May 12 16:15:59.522: INFO: Node jerma-worker is running more than one daemon pod May 12 16:16:00.647: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 16:16:00.734: INFO: Number of nodes with available pods: 1 May 12 16:16:00.734: INFO: Node jerma-worker is running more than one daemon pod May 12 16:16:01.773: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 16:16:01.862: INFO: Number of nodes with available pods: 1 May 12 16:16:01.862: INFO: Node jerma-worker is running more than one daemon pod May 12 16:16:02.840: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 16:16:02.875: INFO: Number of nodes with available pods: 1 May 12 16:16:02.875: INFO: Node jerma-worker is running more than one daemon pod May 12 16:16:03.533: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 16:16:03.545: INFO: Number of nodes with available pods: 1 May 12 16:16:03.545: INFO: Node jerma-worker is running more than one daemon pod May 12 16:16:04.528: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 16:16:04.532: INFO: Number of nodes with available pods: 1 May 12 16:16:04.532: INFO: Node jerma-worker is running more than one daemon pod May 12 16:16:05.978: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 16:16:06.049: INFO: Number of nodes with available pods: 2 May 12 16:16:06.049: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
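The revival works because the DaemonSet controller treats a Failed daemon pod as missing and creates a replacement; the spec flips one pod's status.phase to Failed through the API (something kubectl cannot do directly) and waits for the available count to return to 2. From the outside, deleting a daemon pod demonstrates the same self-healing (label selector and pod name are illustrative):

kubectl get pods -l name=daemon-set -o wide   # one pod per eligible node
kubectl delete pod daemon-set-abcde           # the controller recreates it on the same node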
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1370, will wait for the garbage collector to delete the pods May 12 16:16:06.333: INFO: Deleting DaemonSet.extensions daemon-set took: 7.079765ms May 12 16:16:06.833: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.228892ms May 12 16:16:19.536: INFO: Number of nodes with available pods: 0 May 12 16:16:19.536: INFO: Number of running nodes: 0, number of available pods: 0 May 12 16:16:19.542: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1370/daemonsets","resourceVersion":"15607332"},"items":null} May 12 16:16:19.545: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1370/pods","resourceVersion":"15607332"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:16:19.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1370" for this suite. • [SLOW TEST:30.018 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":10,"skipped":201,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:16:19.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token STEP: reading a file in the container May 12 16:16:26.203: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2483 pod-service-account-0cb66423-ac28-46e7-abe4-fbc8d58d3a6c -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container May 12 16:16:26.395: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2483 pod-service-account-0cb66423-ac28-46e7-abe4-fbc8d58d3a6c -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container May 12 16:16:26.633: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2483 pod-service-account-0cb66423-ac28-46e7-abe4-fbc8d58d3a6c -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:16:26.812: INFO: Waiting up 
to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-2483" for this suite. • [SLOW TEST:7.560 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":11,"skipped":244,"failed":0} SSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:16:27.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 12 16:16:27.963: INFO: (0) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/ pods/ (200; 70.33528ms)
May 12 16:16:27.976: INFO: (1) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 12.025416ms)
May 12 16:16:28.289: INFO: (2) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 313.289204ms)
May 12 16:16:28.294: INFO: (3) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 4.846345ms)
May 12 16:16:28.297: INFO: (4) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.562986ms)
May 12 16:16:28.300: INFO: (5) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.684603ms)
May 12 16:16:28.304: INFO: (6) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.776646ms)
May 12 16:16:28.307: INFO: (7) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.212169ms)
May 12 16:16:28.310: INFO: (8) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.856917ms)
May 12 16:16:28.313: INFO: (9) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.759871ms)
May 12 16:16:28.315: INFO: (10) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.662103ms)
May 12 16:16:28.319: INFO: (11) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.094218ms)
May 12 16:16:28.322: INFO: (12) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.170283ms)
May 12 16:16:28.325: INFO: (13) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.510315ms)
May 12 16:16:28.329: INFO: (14) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.14719ms)
May 12 16:16:28.332: INFO: (15) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.538029ms)
May 12 16:16:28.336: INFO: (16) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.50797ms)
May 12 16:16:28.339: INFO: (17) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.657907ms)
May 12 16:16:28.343: INFO: (18) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.890211ms)
May 12 16:16:28.347: INFO: (19) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.55829ms)
[AfterEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 12 16:16:28.347: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-5568" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":278,"completed":12,"skipped":251,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 12 16:16:28.353: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override command
May 12 16:16:29.535: INFO: Waiting up to 5m0s for pod "client-containers-b8c1d4c2-cd0e-478c-88fc-d537a9d4c56d" in namespace "containers-3658" to be "success or failure"
May 12 16:16:29.929: INFO: Pod "client-containers-b8c1d4c2-cd0e-478c-88fc-d537a9d4c56d": Phase="Pending", Reason="", readiness=false. Elapsed: 394.024748ms
May 12 16:16:31.946: INFO: Pod "client-containers-b8c1d4c2-cd0e-478c-88fc-d537a9d4c56d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.410470175s
May 12 16:16:34.132: INFO: Pod "client-containers-b8c1d4c2-cd0e-478c-88fc-d537a9d4c56d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.596852325s
May 12 16:16:36.161: INFO: Pod "client-containers-b8c1d4c2-cd0e-478c-88fc-d537a9d4c56d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.62545318s
May 12 16:16:39.079: INFO: Pod "client-containers-b8c1d4c2-cd0e-478c-88fc-d537a9d4c56d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.543429808s
May 12 16:16:41.082: INFO: Pod "client-containers-b8c1d4c2-cd0e-478c-88fc-d537a9d4c56d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.546759618s
STEP: Saw pod success
May 12 16:16:41.082: INFO: Pod "client-containers-b8c1d4c2-cd0e-478c-88fc-d537a9d4c56d" satisfied condition "success or failure"
May 12 16:16:41.084: INFO: Trying to get logs from node jerma-worker pod client-containers-b8c1d4c2-cd0e-478c-88fc-d537a9d4c56d container test-container:
STEP: delete the pod
May 12 16:16:41.379: INFO: Waiting for pod client-containers-b8c1d4c2-cd0e-478c-88fc-d537a9d4c56d to disappear
May 12 16:16:41.941: INFO: Pod client-containers-b8c1d4c2-cd0e-478c-88fc-d537a9d4c56d no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 12 16:16:41.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3658" for this suite.
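The override being tested is the container's command field, which replaces the image's ENTRYPOINT (args, by contrast, replaces CMD). A minimal sketch of the kind of pod under test, with illustrative image and command (the log does not print the manifest):

apiVersion: v1
kind: Pod
metadata:
  name: client-containers-override   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/sh", "-c", "echo command override"]   # replaces the image ENTRYPOINT

The suite then pulls the container log (the "Trying to get logs" line above) and asserts that the overridden command's output appears.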
• [SLOW TEST:14.048 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":13,"skipped":263,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:16:42.402: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-744.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-744.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-744.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-744.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-744.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-744.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-744.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-744.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-744.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-744.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-744.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-744.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-744.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 43.120.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.120.43_udp@PTR;check="$$(dig +tcp +noall +answer +search 43.120.107.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.107.120.43_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-744.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-744.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-744.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-744.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-744.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-744.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-744.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-744.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-744.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-744.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-744.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-744.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-744.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 43.120.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.120.43_udp@PTR;check="$$(dig +tcp +noall +answer +search 43.120.107.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.107.120.43_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 12 16:16:54.574: INFO: Unable to read wheezy_udp@dns-test-service.dns-744.svc.cluster.local from pod dns-744/dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0: the server could not find the requested resource (get pods dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0) May 12 16:16:54.577: INFO: Unable to read wheezy_tcp@dns-test-service.dns-744.svc.cluster.local from pod dns-744/dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0: the server could not find the requested resource (get pods dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0) May 12 16:16:54.580: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-744.svc.cluster.local from pod dns-744/dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0: the server could not find the requested resource (get pods dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0) May 12 16:16:54.582: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-744.svc.cluster.local from pod dns-744/dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0: the server could not find the requested resource (get pods dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0) May 12 16:16:54.598: INFO: Unable to read jessie_udp@dns-test-service.dns-744.svc.cluster.local from pod dns-744/dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0: the server could not find the requested resource (get pods dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0) May 12 16:16:54.601: INFO: Unable to read jessie_tcp@dns-test-service.dns-744.svc.cluster.local from pod dns-744/dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0: the server could not find the requested resource (get pods dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0) May 12 16:16:54.603: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-744.svc.cluster.local from pod dns-744/dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0: the server could not find the requested resource (get pods dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0) May 12 16:16:54.606: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-744.svc.cluster.local from pod dns-744/dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0: the server could not find the requested resource (get pods dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0) May 12 16:16:54.621: INFO: Lookups using dns-744/dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0 failed for: [wheezy_udp@dns-test-service.dns-744.svc.cluster.local wheezy_tcp@dns-test-service.dns-744.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-744.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-744.svc.cluster.local jessie_udp@dns-test-service.dns-744.svc.cluster.local jessie_tcp@dns-test-service.dns-744.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-744.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-744.svc.cluster.local] May 12 16:16:59.677: INFO: Unable to read wheezy_udp@dns-test-service.dns-744.svc.cluster.local from pod dns-744/dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0: the server could not find the requested resource (get pods dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0) May 12 16:16:59.679: INFO: Unable to read wheezy_tcp@dns-test-service.dns-744.svc.cluster.local from pod dns-744/dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0: the server could not find the requested resource (get pods dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0) May 12 
16:16:59.681: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-744.svc.cluster.local from pod dns-744/dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0: the server could not find the requested resource (get pods dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0) May 12 16:16:59.683: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-744.svc.cluster.local from pod dns-744/dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0: the server could not find the requested resource (get pods dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0) May 12 16:16:59.697: INFO: Unable to read jessie_udp@dns-test-service.dns-744.svc.cluster.local from pod dns-744/dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0: the server could not find the requested resource (get pods dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0) May 12 16:16:59.699: INFO: Unable to read jessie_tcp@dns-test-service.dns-744.svc.cluster.local from pod dns-744/dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0: the server could not find the requested resource (get pods dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0) May 12 16:16:59.701: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-744.svc.cluster.local from pod dns-744/dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0: the server could not find the requested resource (get pods dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0) May 12 16:16:59.703: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-744.svc.cluster.local from pod dns-744/dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0: the server could not find the requested resource (get pods dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0) May 12 16:16:59.719: INFO: Lookups using dns-744/dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0 failed for: [wheezy_udp@dns-test-service.dns-744.svc.cluster.local wheezy_tcp@dns-test-service.dns-744.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-744.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-744.svc.cluster.local jessie_udp@dns-test-service.dns-744.svc.cluster.local jessie_tcp@dns-test-service.dns-744.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-744.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-744.svc.cluster.local] May 12 16:17:05.001: INFO: Unable to read wheezy_udp@dns-test-service.dns-744.svc.cluster.local from pod dns-744/dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0: the server could not find the requested resource (get pods dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0) May 12 16:17:05.003: INFO: Unable to read wheezy_tcp@dns-test-service.dns-744.svc.cluster.local from pod dns-744/dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0: the server could not find the requested resource (get pods dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0) May 12 16:17:05.020: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-744.svc.cluster.local from pod dns-744/dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0: the server could not find the requested resource (get pods dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0) May 12 16:17:05.068: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-744.svc.cluster.local from pod dns-744/dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0: the server could not find the requested resource (get pods dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0) May 12 16:17:05.423: INFO: Unable to read jessie_udp@dns-test-service.dns-744.svc.cluster.local from pod dns-744/dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0: the server could not find the requested resource (get pods 
dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0) May 12 16:17:05.450: INFO: Unable to read jessie_tcp@dns-test-service.dns-744.svc.cluster.local from pod dns-744/dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0: the server could not find the requested resource (get pods dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0) May 12 16:17:05.452: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-744.svc.cluster.local from pod dns-744/dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0: the server could not find the requested resource (get pods dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0) May 12 16:17:05.456: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-744.svc.cluster.local from pod dns-744/dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0: the server could not find the requested resource (get pods dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0) May 12 16:17:05.660: INFO: Lookups using dns-744/dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0 failed for: [wheezy_udp@dns-test-service.dns-744.svc.cluster.local wheezy_tcp@dns-test-service.dns-744.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-744.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-744.svc.cluster.local jessie_udp@dns-test-service.dns-744.svc.cluster.local jessie_tcp@dns-test-service.dns-744.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-744.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-744.svc.cluster.local] May 12 16:17:09.750: INFO: Unable to read wheezy_udp@dns-test-service.dns-744.svc.cluster.local from pod dns-744/dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0: the server could not find the requested resource (get pods dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0) May 12 16:17:09.754: INFO: Unable to read wheezy_tcp@dns-test-service.dns-744.svc.cluster.local from pod dns-744/dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0: the server could not find the requested resource (get pods dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0) May 12 16:17:09.757: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-744.svc.cluster.local from pod dns-744/dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0: the server could not find the requested resource (get pods dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0) May 12 16:17:09.760: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-744.svc.cluster.local from pod dns-744/dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0: the server could not find the requested resource (get pods dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0) May 12 16:17:10.029: INFO: Unable to read jessie_udp@dns-test-service.dns-744.svc.cluster.local from pod dns-744/dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0: the server could not find the requested resource (get pods dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0) May 12 16:17:10.031: INFO: Unable to read jessie_tcp@dns-test-service.dns-744.svc.cluster.local from pod dns-744/dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0: the server could not find the requested resource (get pods dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0) May 12 16:17:10.034: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-744.svc.cluster.local from pod dns-744/dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0: the server could not find the requested resource (get pods dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0) May 12 16:17:10.036: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-744.svc.cluster.local from pod dns-744/dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0: the server could not find the 
requested resource (get pods dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0) May 12 16:17:10.051: INFO: Lookups using dns-744/dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0 failed for: [wheezy_udp@dns-test-service.dns-744.svc.cluster.local wheezy_tcp@dns-test-service.dns-744.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-744.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-744.svc.cluster.local jessie_udp@dns-test-service.dns-744.svc.cluster.local jessie_tcp@dns-test-service.dns-744.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-744.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-744.svc.cluster.local] May 12 16:17:14.624: INFO: Unable to read wheezy_udp@dns-test-service.dns-744.svc.cluster.local from pod dns-744/dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0: the server could not find the requested resource (get pods dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0) May 12 16:17:14.626: INFO: Unable to read wheezy_tcp@dns-test-service.dns-744.svc.cluster.local from pod dns-744/dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0: the server could not find the requested resource (get pods dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0) May 12 16:17:14.628: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-744.svc.cluster.local from pod dns-744/dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0: the server could not find the requested resource (get pods dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0) May 12 16:17:14.630: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-744.svc.cluster.local from pod dns-744/dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0: the server could not find the requested resource (get pods dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0) May 12 16:17:14.644: INFO: Unable to read jessie_udp@dns-test-service.dns-744.svc.cluster.local from pod dns-744/dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0: the server could not find the requested resource (get pods dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0) May 12 16:17:14.646: INFO: Unable to read jessie_tcp@dns-test-service.dns-744.svc.cluster.local from pod dns-744/dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0: the server could not find the requested resource (get pods dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0) May 12 16:17:14.648: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-744.svc.cluster.local from pod dns-744/dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0: the server could not find the requested resource (get pods dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0) May 12 16:17:14.650: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-744.svc.cluster.local from pod dns-744/dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0: the server could not find the requested resource (get pods dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0) May 12 16:17:14.662: INFO: Lookups using dns-744/dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0 failed for: [wheezy_udp@dns-test-service.dns-744.svc.cluster.local wheezy_tcp@dns-test-service.dns-744.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-744.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-744.svc.cluster.local jessie_udp@dns-test-service.dns-744.svc.cluster.local jessie_tcp@dns-test-service.dns-744.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-744.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-744.svc.cluster.local] May 12 16:17:19.626: INFO: Unable to read wheezy_udp@dns-test-service.dns-744.svc.cluster.local from pod 
dns-744/dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0: the server could not find the requested resource (get pods dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0) May 12 16:17:19.629: INFO: Unable to read wheezy_tcp@dns-test-service.dns-744.svc.cluster.local from pod dns-744/dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0: the server could not find the requested resource (get pods dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0) May 12 16:17:19.632: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-744.svc.cluster.local from pod dns-744/dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0: the server could not find the requested resource (get pods dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0) May 12 16:17:19.634: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-744.svc.cluster.local from pod dns-744/dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0: the server could not find the requested resource (get pods dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0) May 12 16:17:19.652: INFO: Unable to read jessie_udp@dns-test-service.dns-744.svc.cluster.local from pod dns-744/dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0: the server could not find the requested resource (get pods dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0) May 12 16:17:19.654: INFO: Unable to read jessie_tcp@dns-test-service.dns-744.svc.cluster.local from pod dns-744/dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0: the server could not find the requested resource (get pods dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0) May 12 16:17:19.656: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-744.svc.cluster.local from pod dns-744/dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0: the server could not find the requested resource (get pods dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0) May 12 16:17:19.659: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-744.svc.cluster.local from pod dns-744/dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0: the server could not find the requested resource (get pods dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0) May 12 16:17:19.679: INFO: Lookups using dns-744/dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0 failed for: [wheezy_udp@dns-test-service.dns-744.svc.cluster.local wheezy_tcp@dns-test-service.dns-744.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-744.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-744.svc.cluster.local jessie_udp@dns-test-service.dns-744.svc.cluster.local jessie_tcp@dns-test-service.dns-744.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-744.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-744.svc.cluster.local] May 12 16:17:24.820: INFO: DNS probes using dns-744/dns-test-ca1e2492-a1e9-4aa9-9cfc-1d47eaf748a0 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:17:27.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-744" for this suite. 
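For reference, the fragment at the top of this section is the tail of one of the probe loops the test runs inside its "wheezy" and "jessie" util containers: each loop resolves one expected name once per second and, on a non-empty answer, writes an OK marker under /results. A minimal sketch of one such loop, using a name probed above; the exact dig flags are an assumption, only the "echo OK > /results/<name>" convention is taken from the fragment itself:

while true; do
  # resolve the service name over UDP; any non-empty answer counts as success
  check=$(dig +notcp +noall +answer dns-test-service.dns-744.svc.cluster.local A)
  test -n "$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-744.svc.cluster.local
  sleep 1
done

The repeated "the server could not find the requested resource" lines above are most likely the prober failing to fetch those per-name result files from the pod before the loops have written them; the run is declared successful once every expected file appears.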
• [SLOW TEST:45.173 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":14,"skipped":327,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:17:27.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller May 12 16:17:28.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8907' May 12 16:17:30.154: INFO: stderr: "" May 12 16:17:30.154: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 12 16:17:30.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8907' May 12 16:17:30.295: INFO: stderr: "" May 12 16:17:30.295: INFO: stdout: "update-demo-nautilus-2bxtr update-demo-nautilus-ljqbg " May 12 16:17:30.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2bxtr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8907' May 12 16:17:30.476: INFO: stderr: "" May 12 16:17:30.476: INFO: stdout: "" May 12 16:17:30.476: INFO: update-demo-nautilus-2bxtr is created but not running May 12 16:17:35.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8907' May 12 16:17:35.996: INFO: stderr: "" May 12 16:17:35.996: INFO: stdout: "update-demo-nautilus-2bxtr update-demo-nautilus-ljqbg " May 12 16:17:35.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2bxtr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8907' May 12 16:17:36.462: INFO: stderr: "" May 12 16:17:36.462: INFO: stdout: "" May 12 16:17:36.462: INFO: update-demo-nautilus-2bxtr is created but not running May 12 16:17:41.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8907' May 12 16:17:41.563: INFO: stderr: "" May 12 16:17:41.564: INFO: stdout: "update-demo-nautilus-2bxtr update-demo-nautilus-ljqbg " May 12 16:17:41.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2bxtr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8907' May 12 16:17:41.650: INFO: stderr: "" May 12 16:17:41.650: INFO: stdout: "true" May 12 16:17:41.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2bxtr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8907' May 12 16:17:41.749: INFO: stderr: "" May 12 16:17:41.750: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 12 16:17:41.750: INFO: validating pod update-demo-nautilus-2bxtr May 12 16:17:41.753: INFO: got data: { "image": "nautilus.jpg" } May 12 16:17:41.754: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 12 16:17:41.754: INFO: update-demo-nautilus-2bxtr is verified up and running May 12 16:17:41.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ljqbg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8907' May 12 16:17:41.839: INFO: stderr: "" May 12 16:17:41.839: INFO: stdout: "true" May 12 16:17:41.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ljqbg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8907' May 12 16:17:41.913: INFO: stderr: "" May 12 16:17:41.914: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 12 16:17:41.914: INFO: validating pod update-demo-nautilus-ljqbg May 12 16:17:41.917: INFO: got data: { "image": "nautilus.jpg" } May 12 16:17:41.917: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 12 16:17:41.917: INFO: update-demo-nautilus-ljqbg is verified up and running STEP: scaling down the replication controller May 12 16:17:41.920: INFO: scanned /root for discovery docs: May 12 16:17:41.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-8907' May 12 16:17:43.564: INFO: stderr: "" May 12 16:17:43.564: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 12 16:17:43.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8907' May 12 16:17:43.664: INFO: stderr: "" May 12 16:17:43.664: INFO: stdout: "update-demo-nautilus-2bxtr update-demo-nautilus-ljqbg " STEP: Replicas for name=update-demo: expected=1 actual=2 May 12 16:17:48.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8907' May 12 16:17:48.775: INFO: stderr: "" May 12 16:17:48.775: INFO: stdout: "update-demo-nautilus-ljqbg " May 12 16:17:48.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ljqbg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8907' May 12 16:17:49.008: INFO: stderr: "" May 12 16:17:49.008: INFO: stdout: "true" May 12 16:17:49.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ljqbg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8907' May 12 16:17:49.099: INFO: stderr: "" May 12 16:17:49.099: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 12 16:17:49.099: INFO: validating pod update-demo-nautilus-ljqbg May 12 16:17:49.102: INFO: got data: { "image": "nautilus.jpg" } May 12 16:17:49.102: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 12 16:17:49.102: INFO: update-demo-nautilus-ljqbg is verified up and running STEP: scaling up the replication controller May 12 16:17:49.104: INFO: scanned /root for discovery docs: May 12 16:17:49.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-8907' May 12 16:17:50.343: INFO: stderr: "" May 12 16:17:50.343: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 12 16:17:50.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8907' May 12 16:17:50.431: INFO: stderr: "" May 12 16:17:50.431: INFO: stdout: "update-demo-nautilus-ljqbg update-demo-nautilus-lxtp4 " May 12 16:17:50.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ljqbg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8907' May 12 16:17:50.513: INFO: stderr: "" May 12 16:17:50.513: INFO: stdout: "true" May 12 16:17:50.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ljqbg -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8907' May 12 16:17:50.608: INFO: stderr: "" May 12 16:17:50.608: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 12 16:17:50.608: INFO: validating pod update-demo-nautilus-ljqbg May 12 16:17:50.611: INFO: got data: { "image": "nautilus.jpg" } May 12 16:17:50.611: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 12 16:17:50.611: INFO: update-demo-nautilus-ljqbg is verified up and running May 12 16:17:50.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lxtp4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8907' May 12 16:17:50.808: INFO: stderr: "" May 12 16:17:50.808: INFO: stdout: "" May 12 16:17:50.808: INFO: update-demo-nautilus-lxtp4 is created but not running May 12 16:17:55.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8907' May 12 16:17:55.913: INFO: stderr: "" May 12 16:17:55.913: INFO: stdout: "update-demo-nautilus-ljqbg update-demo-nautilus-lxtp4 " May 12 16:17:55.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ljqbg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8907' May 12 16:17:56.007: INFO: stderr: "" May 12 16:17:56.007: INFO: stdout: "true" May 12 16:17:56.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ljqbg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8907' May 12 16:17:56.107: INFO: stderr: "" May 12 16:17:56.107: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 12 16:17:56.107: INFO: validating pod update-demo-nautilus-ljqbg May 12 16:17:56.110: INFO: got data: { "image": "nautilus.jpg" } May 12 16:17:56.110: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 12 16:17:56.110: INFO: update-demo-nautilus-ljqbg is verified up and running May 12 16:17:56.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lxtp4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8907' May 12 16:17:56.200: INFO: stderr: "" May 12 16:17:56.200: INFO: stdout: "true" May 12 16:17:56.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lxtp4 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8907' May 12 16:17:56.291: INFO: stderr: "" May 12 16:17:56.291: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 12 16:17:56.291: INFO: validating pod update-demo-nautilus-lxtp4 May 12 16:17:56.294: INFO: got data: { "image": "nautilus.jpg" } May 12 16:17:56.294: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 12 16:17:56.294: INFO: update-demo-nautilus-lxtp4 is verified up and running STEP: using delete to clean up resources May 12 16:17:56.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8907' May 12 16:17:56.441: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 12 16:17:56.441: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 12 16:17:56.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8907' May 12 16:17:56.536: INFO: stderr: "No resources found in kubectl-8907 namespace.\n" May 12 16:17:56.536: INFO: stdout: "" May 12 16:17:56.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8907 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 12 16:17:56.628: INFO: stderr: "" May 12 16:17:56.628: INFO: stdout: "update-demo-nautilus-ljqbg\nupdate-demo-nautilus-lxtp4\n" May 12 16:17:57.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8907' May 12 16:17:58.202: INFO: stderr: "No resources found in kubectl-8907 namespace.\n" May 12 16:17:58.202: INFO: stdout: "" May 12 16:17:58.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8907 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 12 16:17:58.291: INFO: stderr: "" May 12 16:17:58.291: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:17:58.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8907" for this suite. 
• [SLOW TEST:30.721 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":278,"completed":15,"skipped":364,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:17:58.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium May 12 16:18:00.469: INFO: Waiting up to 5m0s for pod "pod-fdc8175c-9eb8-4411-823b-749f1149e256" in namespace "emptydir-4878" to be "success or failure" May 12 16:18:00.534: INFO: Pod "pod-fdc8175c-9eb8-4411-823b-749f1149e256": Phase="Pending", Reason="", readiness=false. Elapsed: 65.16757ms May 12 16:18:02.642: INFO: Pod "pod-fdc8175c-9eb8-4411-823b-749f1149e256": Phase="Pending", Reason="", readiness=false. Elapsed: 2.173013812s May 12 16:18:04.706: INFO: Pod "pod-fdc8175c-9eb8-4411-823b-749f1149e256": Phase="Pending", Reason="", readiness=false. Elapsed: 4.236909116s May 12 16:18:06.930: INFO: Pod "pod-fdc8175c-9eb8-4411-823b-749f1149e256": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.460522156s STEP: Saw pod success May 12 16:18:06.930: INFO: Pod "pod-fdc8175c-9eb8-4411-823b-749f1149e256" satisfied condition "success or failure" May 12 16:18:06.959: INFO: Trying to get logs from node jerma-worker2 pod pod-fdc8175c-9eb8-4411-823b-749f1149e256 container test-container: STEP: delete the pod May 12 16:18:07.744: INFO: Waiting for pod pod-fdc8175c-9eb8-4411-823b-749f1149e256 to disappear May 12 16:18:07.918: INFO: Pod pod-fdc8175c-9eb8-4411-823b-749f1149e256 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:18:07.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4878" for this suite. 
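Most volume tests in this suite, including the EmptyDir case above, follow the same pattern: create a one-shot pod, poll it until its phase reaches Succeeded (the "success or failure" condition), read the container log, then delete the pod. The polling step can be reproduced by hand with jsonpath; the pod and namespace below are this run's and no longer exist, so substitute live ones:

kubectl get pod pod-fdc8175c-9eb8-4411-823b-749f1149e256 --namespace=emptydir-4878 -o jsonpath='{.status.phase}'
kubectl logs pod-fdc8175c-9eb8-4411-823b-749f1149e256 --namespace=emptydir-4878 -c test-container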
• [SLOW TEST:9.859 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":16,"skipped":369,"failed":0} SS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:18:08.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:18:16.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2817" for this suite. 
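The watch test above produces events from a background goroutine, then opens one watch per observed resourceVersion and asserts that every watcher delivers the same sequence. At the API level each of those watches is an ordinary watch request started from a given resourceVersion; a sketch through the API proxy, where the resource kind (configmaps) and the resourceVersion value are assumptions for illustration:

# expose the apiserver locally, then open a watch from a chosen resourceVersion
kubectl proxy --port=8001 &
curl -s "http://127.0.0.1:8001/api/v1/namespaces/watch-2817/configmaps?watch=1&resourceVersion=15608000"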
• [SLOW TEST:8.416 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":17,"skipped":371,"failed":0} SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:18:16.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-548 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-548 STEP: Creating statefulset with conflicting port in namespace statefulset-548 STEP: Waiting until pod test-pod starts running in namespace statefulset-548 STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-548 May 12 16:18:22.760: INFO: Observed stateful pod in namespace: statefulset-548, name: ss-0, uid: f0bedc18-5883-4b59-a66c-88b8eaf50fd9, status phase: Pending. Waiting for statefulset controller to delete. May 12 16:18:22.929: INFO: Observed stateful pod in namespace: statefulset-548, name: ss-0, uid: f0bedc18-5883-4b59-a66c-88b8eaf50fd9, status phase: Failed. Waiting for statefulset controller to delete. May 12 16:18:22.935: INFO: Observed stateful pod in namespace: statefulset-548, name: ss-0, uid: f0bedc18-5883-4b59-a66c-88b8eaf50fd9, status phase: Failed. Waiting for statefulset controller to delete.
May 12 16:18:22.938: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-548 STEP: Removing pod with conflicting port in namespace statefulset-548 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-548 and is running [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 12 16:18:29.463: INFO: Deleting all statefulsets in ns statefulset-548 May 12 16:18:29.465: INFO: Scaling statefulset ss to 0 May 12 16:18:39.523: INFO: Waiting for statefulset status.replicas to be updated to 0 May 12 16:18:39.526: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:18:39.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-548" for this suite. • [SLOW TEST:23.103 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":18,"skipped":373,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:18:39.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi-version CRD May 12 16:18:39.878: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:18:55.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6688" for this suite.
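After such a rename, both the served versions and the aggregated OpenAPI document can be inspected directly. A sketch, with <crd-name> and <Kind> as placeholders since the CRD created by this test is synthetic and is deleted with its namespace:

# which versions the CRD serves now
kubectl get crd <crd-name> -o jsonpath='{range .spec.versions[*]}{.name}{" served="}{.served}{"\n"}{end}'
# whether the renamed version's definitions made it into the published spec
kubectl get --raw /openapi/v2 | grep -c '<Kind>'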
• [SLOW TEST:15.761 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":19,"skipped":387,"failed":0} SSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:18:55.437: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 12 16:18:55.545: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 12 16:18:55.575: INFO: Waiting for terminating namespaces to be deleted... May 12 16:18:55.578: INFO: Logging pods the kubelet thinks are on node jerma-worker before test May 12 16:18:55.595: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 12 16:18:55.595: INFO: Container kindnet-cni ready: true, restart count 0 May 12 16:18:55.595: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 12 16:18:55.595: INFO: Container kube-proxy ready: true, restart count 0 May 12 16:18:55.595: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test May 12 16:18:55.602: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) May 12 16:18:55.602: INFO: Container kube-hunter ready: false, restart count 0 May 12 16:18:55.602: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 12 16:18:55.602: INFO: Container kindnet-cni ready: true, restart count 0 May 12 16:18:55.602: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) May 12 16:18:55.602: INFO: Container kube-bench ready: false, restart count 0 May 12 16:18:55.602: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 12 16:18:55.602: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.160e543c1a2cba9c], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
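That FailedScheduling event is the expected outcome: the pod requests a node label no node carries, so all three nodes are rejected. The scenario is easy to reproduce by hand; the label key/value below are illustrative, and the image is the agnhost image this suite uses elsewhere:

kubectl run restricted-pod --restart=Never --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 \
  --overrides='{"spec":{"nodeSelector":{"no-such-label":"present"}}}'
kubectl describe pod restricted-pod   # the Events section shows FailedScheduling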
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:18:56.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2363" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":20,"skipped":394,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:18:56.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-downwardapi-sj5n STEP: Creating a pod to test atomic-volume-subpath May 12 16:18:57.665: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-sj5n" in namespace "subpath-7385" to be "success or failure" May 12 16:18:57.681: INFO: Pod "pod-subpath-test-downwardapi-sj5n": Phase="Pending", Reason="", readiness=false. Elapsed: 15.724086ms May 12 16:18:59.684: INFO: Pod "pod-subpath-test-downwardapi-sj5n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018809061s May 12 16:19:01.687: INFO: Pod "pod-subpath-test-downwardapi-sj5n": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0212684s May 12 16:19:03.691: INFO: Pod "pod-subpath-test-downwardapi-sj5n": Phase="Running", Reason="", readiness=true. Elapsed: 6.025299148s May 12 16:19:05.723: INFO: Pod "pod-subpath-test-downwardapi-sj5n": Phase="Running", Reason="", readiness=true. Elapsed: 8.057348853s May 12 16:19:07.764: INFO: Pod "pod-subpath-test-downwardapi-sj5n": Phase="Running", Reason="", readiness=true. Elapsed: 10.098953521s May 12 16:19:09.768: INFO: Pod "pod-subpath-test-downwardapi-sj5n": Phase="Running", Reason="", readiness=true. Elapsed: 12.102678186s May 12 16:19:11.781: INFO: Pod "pod-subpath-test-downwardapi-sj5n": Phase="Running", Reason="", readiness=true. Elapsed: 14.115659009s May 12 16:19:13.984: INFO: Pod "pod-subpath-test-downwardapi-sj5n": Phase="Running", Reason="", readiness=true. Elapsed: 16.318619111s May 12 16:19:15.988: INFO: Pod "pod-subpath-test-downwardapi-sj5n": Phase="Running", Reason="", readiness=true. Elapsed: 18.322635905s May 12 16:19:17.991: INFO: Pod "pod-subpath-test-downwardapi-sj5n": Phase="Running", Reason="", readiness=true. Elapsed: 20.325997499s May 12 16:19:20.098: INFO: Pod "pod-subpath-test-downwardapi-sj5n": Phase="Running", Reason="", readiness=true. Elapsed: 22.432898984s May 12 16:19:22.102: INFO: Pod "pod-subpath-test-downwardapi-sj5n": Phase="Running", Reason="", readiness=true. 
Elapsed: 24.436452803s May 12 16:19:24.261: INFO: Pod "pod-subpath-test-downwardapi-sj5n": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.596006467s STEP: Saw pod success May 12 16:19:24.261: INFO: Pod "pod-subpath-test-downwardapi-sj5n" satisfied condition "success or failure" May 12 16:19:24.264: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-downwardapi-sj5n container test-container-subpath-downwardapi-sj5n: STEP: delete the pod May 12 16:19:24.520: INFO: Waiting for pod pod-subpath-test-downwardapi-sj5n to disappear May 12 16:19:24.555: INFO: Pod pod-subpath-test-downwardapi-sj5n no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-sj5n May 12 16:19:24.555: INFO: Deleting pod "pod-subpath-test-downwardapi-sj5n" in namespace "subpath-7385" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:19:24.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7385" for this suite. • [SLOW TEST:27.628 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":21,"skipped":404,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:19:24.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
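The handler container just created is the observation point: the pod under test carries a preStop exec hook, and when that pod is deleted the kubelet runs the hook before killing the container, which is what the "check prestop hook" step below verifies against the handler. A minimal pod of the same shape, with the image, command, and handler address as placeholders rather than the test's actual fixtures:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
    lifecycle:
      preStop:
        exec:
          # runs inside the container on deletion, before SIGTERM
          command: ["/bin/sh", "-c", "wget -qO- http://<handler-ip>:8080/echo?msg=prestop"]
EOF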
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 12 16:19:35.192: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 12 16:19:35.218: INFO: Pod pod-with-prestop-exec-hook still exists May 12 16:19:37.218: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 12 16:19:37.221: INFO: Pod pod-with-prestop-exec-hook still exists May 12 16:19:39.218: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 12 16:19:39.221: INFO: Pod pod-with-prestop-exec-hook still exists May 12 16:19:41.218: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 12 16:19:41.221: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:19:41.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5474" for this suite. • [SLOW TEST:16.667 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":22,"skipped":431,"failed":0} SSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:19:41.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-7bc88caf-44ae-4963-9f4e-37faf4b4639f STEP: Creating a pod to test consume configMaps May 12 16:19:41.975: INFO: Waiting up to 5m0s for pod "pod-configmaps-31c70bc4-cc00-4fd6-bff7-056287cf3a13" in namespace "configmap-5420" to be "success or failure" May 12 16:19:42.310: INFO: Pod "pod-configmaps-31c70bc4-cc00-4fd6-bff7-056287cf3a13": Phase="Pending", Reason="", readiness=false. Elapsed: 335.269169ms May 12 16:19:44.314: INFO: Pod "pod-configmaps-31c70bc4-cc00-4fd6-bff7-056287cf3a13": Phase="Pending", Reason="", readiness=false. Elapsed: 2.339053864s May 12 16:19:46.322: INFO: Pod "pod-configmaps-31c70bc4-cc00-4fd6-bff7-056287cf3a13": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.34752542s May 12 16:19:48.404: INFO: Pod "pod-configmaps-31c70bc4-cc00-4fd6-bff7-056287cf3a13": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.429315283s STEP: Saw pod success May 12 16:19:48.404: INFO: Pod "pod-configmaps-31c70bc4-cc00-4fd6-bff7-056287cf3a13" satisfied condition "success or failure" May 12 16:19:48.407: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-31c70bc4-cc00-4fd6-bff7-056287cf3a13 container configmap-volume-test: STEP: delete the pod May 12 16:19:49.274: INFO: Waiting for pod pod-configmaps-31c70bc4-cc00-4fd6-bff7-056287cf3a13 to disappear May 12 16:19:49.323: INFO: Pod pod-configmaps-31c70bc4-cc00-4fd6-bff7-056287cf3a13 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:19:49.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5420" for this suite. • [SLOW TEST:8.373 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":23,"skipped":435,"failed":0} SS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:19:49.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 12 16:19:50.437: INFO: Creating deployment "test-recreate-deployment" May 12 16:19:50.766: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1 May 12 16:19:51.219: INFO: deployment "test-recreate-deployment" doesn't have the required revision set May 12 16:19:53.658: INFO: Waiting for deployment "test-recreate-deployment" to complete May 12 16:19:53.660: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724897191, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724897191, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724897191, loc:(*time.Location)(0x78ee0c0)}},
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724897191, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 16:19:55.723: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724897191, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724897191, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724897191, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724897191, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 16:19:57.662: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 12 16:19:57.667: INFO: Updating deployment test-recreate-deployment May 12 16:19:57.667: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 12 16:19:58.664: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-5395 /apis/apps/v1/namespaces/deployment-5395/deployments/test-recreate-deployment 69aea504-224c-42d2-bb04-752b3f38c296 15608614 2 2020-05-12 16:19:50 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003faf928 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-12 16:19:58 +0000 UTC,LastTransitionTime:2020-05-12 16:19:58 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is
progressing.,LastUpdateTime:2020-05-12 16:19:58 +0000 UTC,LastTransitionTime:2020-05-12 16:19:51 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} May 12 16:19:58.696: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-5395 /apis/apps/v1/namespaces/deployment-5395/replicasets/test-recreate-deployment-5f94c574ff 4e7b2fb1-dcaf-46e1-acf4-90a8b1336c13 15608612 1 2020-05-12 16:19:58 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 69aea504-224c-42d2-bb04-752b3f38c296 0xc003f585c7 0xc003f585c8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003f58648 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 12 16:19:58.696: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 12 16:19:58.696: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-5395 /apis/apps/v1/namespaces/deployment-5395/replicasets/test-recreate-deployment-799c574856 ce11f2fa-5c1f-41c8-85a3-6dbc0fbb349a 15608601 2 2020-05-12 16:19:50 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 69aea504-224c-42d2-bb04-752b3f38c296 0xc003f586e7 0xc003f586e8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003f587e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 12 16:19:58.712: INFO: Pod "test-recreate-deployment-5f94c574ff-btlr2" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-btlr2 test-recreate-deployment-5f94c574ff- deployment-5395 /api/v1/namespaces/deployment-5395/pods/test-recreate-deployment-5f94c574ff-btlr2 6c459fe6-3818-4e7a-8826-1dc11a4401dc 15608615 0 2020-05-12 16:19:58 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 4e7b2fb1-dcaf-46e1-acf4-90a8b1336c13 0xc003f59147 0xc003f59148}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d5tlx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d5tlx,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d5tlx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-05-12 16:19:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 16:19:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 16:19:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 16:19:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-12 16:19:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:19:58.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5395" for this suite. • [SLOW TEST:9.114 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":24,"skipped":437,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
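The dumps above show the mechanics of this test: the Deployment's Strategy is Type:Recreate with RollingUpdate:nil, the old ReplicaSet (revision 1, agnhost) is scaled to Replicas:*0 before the new ReplicaSet (revision 2, httpd) creates any pod, and Available stays False until the replacement pod is ready. A minimal manifest exercising the same strategy, reusing names and the image from this run (treat everything else as an illustrative sketch, not the suite's exact fixture):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      name: sample-pod-3
  strategy:
    type: Recreate            # no rolling update: all old pods are killed first
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      containers:
      - name: httpd
        image: docker.io/library/httpd:2.4.38-alpine
EOF
# Changing the pod template afterwards, e.g.
#   kubectl set image deployment/test-recreate-deployment httpd=docker.io/library/httpd:2.4.39-alpine
# makes the controller scale the old ReplicaSet to 0 before the new ReplicaSet
# starts any pod, which is exactly what the watch above verifies.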
------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:19:58.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 12 16:19:59.120: INFO: Waiting up to 5m0s for pod "downwardapi-volume-93c9f43e-f610-49ca-8e06-fb39554da787" in namespace "downward-api-96" to be "success or failure" May 12 16:19:59.124: INFO: Pod "downwardapi-volume-93c9f43e-f610-49ca-8e06-fb39554da787": Phase="Pending", Reason="", readiness=false. Elapsed: 4.593255ms May 12 16:20:01.183: INFO: Pod "downwardapi-volume-93c9f43e-f610-49ca-8e06-fb39554da787": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063070187s May 12 16:20:03.314: INFO: Pod "downwardapi-volume-93c9f43e-f610-49ca-8e06-fb39554da787": Phase="Pending", Reason="", readiness=false. Elapsed: 4.194530621s May 12 16:20:05.633: INFO: Pod "downwardapi-volume-93c9f43e-f610-49ca-8e06-fb39554da787": Phase="Pending", Reason="", readiness=false. Elapsed: 6.512996605s May 12 16:20:07.784: INFO: Pod "downwardapi-volume-93c9f43e-f610-49ca-8e06-fb39554da787": Phase="Running", Reason="", readiness=true. Elapsed: 8.664468984s May 12 16:20:09.814: INFO: Pod "downwardapi-volume-93c9f43e-f610-49ca-8e06-fb39554da787": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.694382576s STEP: Saw pod success May 12 16:20:09.814: INFO: Pod "downwardapi-volume-93c9f43e-f610-49ca-8e06-fb39554da787" satisfied condition "success or failure" May 12 16:20:10.123: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-93c9f43e-f610-49ca-8e06-fb39554da787 container client-container: STEP: delete the pod May 12 16:20:11.156: INFO: Waiting for pod downwardapi-volume-93c9f43e-f610-49ca-8e06-fb39554da787 to disappear May 12 16:20:11.167: INFO: Pod downwardapi-volume-93c9f43e-f610-49ca-8e06-fb39554da787 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:20:11.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-96" for this suite. • [SLOW TEST:12.514 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":25,"skipped":467,"failed":0}
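The downward API volume plugin exercised here exposes a container's own resource fields as files inside the pod. A minimal sketch of the kind of pod this test creates, assuming a busybox image and illustrative names (the actual fixture uses a generated name and a test helper image):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    # Print the projected file; the test asserts it contains the cpu request.
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m                      # surfaces as "250" with a 1m divisor
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m
EOF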
[sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:20:11.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-1205/configmap-test-5ae64c7e-e13c-47e2-bd9c-0d98b17e87d6 STEP: Creating a pod to test consume configMaps May 12 16:20:12.768: INFO: Waiting up to 5m0s for pod "pod-configmaps-0d7db22b-4c08-4826-a424-e427411cf66a" in namespace "configmap-1205" to be "success or failure" May 12 16:20:13.171: INFO: Pod "pod-configmaps-0d7db22b-4c08-4826-a424-e427411cf66a": Phase="Pending", Reason="", readiness=false. Elapsed: 403.149613ms May 12 16:20:15.174: INFO: Pod "pod-configmaps-0d7db22b-4c08-4826-a424-e427411cf66a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.4062131s May 12 16:20:17.176: INFO: Pod "pod-configmaps-0d7db22b-4c08-4826-a424-e427411cf66a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.408836206s May 12 16:20:19.305: INFO: Pod "pod-configmaps-0d7db22b-4c08-4826-a424-e427411cf66a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.537570565s STEP: Saw pod success May 12 16:20:19.305: INFO: Pod "pod-configmaps-0d7db22b-4c08-4826-a424-e427411cf66a" satisfied condition "success or failure" May 12 16:20:19.307: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-0d7db22b-4c08-4826-a424-e427411cf66a container env-test: STEP: delete the pod May 12 16:20:19.536: INFO: Waiting for pod pod-configmaps-0d7db22b-4c08-4826-a424-e427411cf66a to disappear May 12 16:20:19.539: INFO: Pod pod-configmaps-0d7db22b-4c08-4826-a424-e427411cf66a no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:20:19.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1205" for this suite. • [SLOW TEST:8.314 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":26,"skipped":467,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
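This test injects a ConfigMap key into a container's environment and checks the dumped env output. Roughly, under illustrative names and values (the real fixture generates its names):

kubectl create configmap configmap-test --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-env-demo        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "env"]       # the test greps this output for the value
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test
          key: data-1
EOF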
------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:20:19.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-0139fa50-327f-4254-8d95-543c0f923d10 STEP: Creating a pod to test consume configMaps May 12 16:20:20.113: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-69e228e4-6940-45c1-af0c-d4cf45c6a355" in namespace "projected-6139" to be "success or failure" May 12 16:20:20.144: INFO: Pod "pod-projected-configmaps-69e228e4-6940-45c1-af0c-d4cf45c6a355": Phase="Pending", Reason="", readiness=false. Elapsed: 30.249811ms May 12 16:20:22.372: INFO: Pod "pod-projected-configmaps-69e228e4-6940-45c1-af0c-d4cf45c6a355": Phase="Pending", Reason="", readiness=false. Elapsed: 2.258250611s May 12 16:20:24.591: INFO: Pod "pod-projected-configmaps-69e228e4-6940-45c1-af0c-d4cf45c6a355": Phase="Pending", Reason="", readiness=false. Elapsed: 4.477376148s May 12 16:20:26.847: INFO: Pod "pod-projected-configmaps-69e228e4-6940-45c1-af0c-d4cf45c6a355": Phase="Running", Reason="", readiness=true. Elapsed: 6.733778941s May 12 16:20:28.851: INFO: Pod "pod-projected-configmaps-69e228e4-6940-45c1-af0c-d4cf45c6a355": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.73738644s STEP: Saw pod success May 12 16:20:28.851: INFO: Pod "pod-projected-configmaps-69e228e4-6940-45c1-af0c-d4cf45c6a355" satisfied condition "success or failure" May 12 16:20:28.854: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-69e228e4-6940-45c1-af0c-d4cf45c6a355 container projected-configmap-volume-test: STEP: delete the pod May 12 16:20:28.894: INFO: Waiting for pod pod-projected-configmaps-69e228e4-6940-45c1-af0c-d4cf45c6a355 to disappear May 12 16:20:28.981: INFO: Pod pod-projected-configmaps-69e228e4-6940-45c1-af0c-d4cf45c6a355 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:20:28.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6139" for this suite. • [SLOW TEST:9.506 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":27,"skipped":522,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
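A projected volume wraps several sources (configMap, secret, downwardAPI, serviceAccountToken) behind a single mount; "as non-root" means the reading container runs with a non-zero UID. A sketch under those assumptions, with illustrative names and key:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo  # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                    # non-root reader
  containers:
  - name: projected-configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/projected/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume   # created first, as in the STEP above;
                                                   # assumed to hold a key named data-1
EOF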
------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:20:29.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs May 12 16:20:29.152: INFO: Waiting up to 5m0s for pod "pod-fb24886a-a918-41d8-bc79-4d366b06be2f" in namespace "emptydir-287" to be "success or failure" May 12 16:20:29.155: INFO: Pod "pod-fb24886a-a918-41d8-bc79-4d366b06be2f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.670625ms May 12 16:20:31.160: INFO: Pod "pod-fb24886a-a918-41d8-bc79-4d366b06be2f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008222696s May 12 16:20:33.200: INFO: Pod "pod-fb24886a-a918-41d8-bc79-4d366b06be2f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048490241s May 12 16:20:35.368: INFO: Pod "pod-fb24886a-a918-41d8-bc79-4d366b06be2f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.21634822s STEP: Saw pod success May 12 16:20:35.368: INFO: Pod "pod-fb24886a-a918-41d8-bc79-4d366b06be2f" satisfied condition "success or failure" May 12 16:20:35.370: INFO: Trying to get logs from node jerma-worker2 pod pod-fb24886a-a918-41d8-bc79-4d366b06be2f container test-container: STEP: delete the pod May 12 16:20:35.741: INFO: Waiting for pod pod-fb24886a-a918-41d8-bc79-4d366b06be2f to disappear May 12 16:20:35.816: INFO: Pod pod-fb24886a-a918-41d8-bc79-4d366b06be2f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:20:35.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-287" for this suite. • [SLOW TEST:7.272 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":28,"skipped":590,"failed":0} SSSSSSSSSSSSSSSSSS
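The (root,0666,tmpfs) variant means: mount a memory-backed emptyDir, write a file as root with mode 0666, and verify the permissions the kernel reports. A sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-demo              # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    # Write as root (the default UID 0 here), force mode 0666, then report it.
    command: ["sh", "-c", "echo content > /test-volume/f && chmod 0666 /test-volume/f && stat -c '%a' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                   # tmpfs rather than node disk
EOF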
------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:20:36.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-9a2b4ea3-8c75-447d-9c31-f60dae67829f STEP: Creating a pod to test consume configMaps May 12 16:20:37.306: INFO: Waiting up to 5m0s for pod "pod-configmaps-7dacc7ab-e6d8-4123-985d-cd2db3c28f4d" in namespace "configmap-6036" to be "success or failure" May 12 16:20:37.509: INFO: Pod "pod-configmaps-7dacc7ab-e6d8-4123-985d-cd2db3c28f4d": Phase="Pending", Reason="", readiness=false. Elapsed: 203.390547ms May 12 16:20:39.584: INFO: Pod "pod-configmaps-7dacc7ab-e6d8-4123-985d-cd2db3c28f4d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.278523031s May 12 16:20:41.770: INFO: Pod "pod-configmaps-7dacc7ab-e6d8-4123-985d-cd2db3c28f4d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.464496572s May 12 16:20:44.237: INFO: Pod "pod-configmaps-7dacc7ab-e6d8-4123-985d-cd2db3c28f4d": Phase="Running", Reason="", readiness=true. Elapsed: 6.931335424s May 12 16:20:46.240: INFO: Pod "pod-configmaps-7dacc7ab-e6d8-4123-985d-cd2db3c28f4d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.933794766s STEP: Saw pod success May 12 16:20:46.240: INFO: Pod "pod-configmaps-7dacc7ab-e6d8-4123-985d-cd2db3c28f4d" satisfied condition "success or failure" May 12 16:20:46.243: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-7dacc7ab-e6d8-4123-985d-cd2db3c28f4d container configmap-volume-test: STEP: delete the pod May 12 16:20:46.485: INFO: Waiting for pod pod-configmaps-7dacc7ab-e6d8-4123-985d-cd2db3c28f4d to disappear May 12 16:20:46.644: INFO: Pod pod-configmaps-7dacc7ab-e6d8-4123-985d-cd2db3c28f4d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:20:46.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6036" for this suite. • [SLOW TEST:10.323 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":29,"skipped":608,"failed":0} SSSSSSSSSSSSSSSSSSSSSS
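Plain ConfigMap volumes are the non-projected counterpart of the previous test: each ConfigMap key becomes a file under the mount path. A sketch with illustrative names and key:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-volume-demo     # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume      # must already exist with a key named data-1
EOF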
------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:20:46.651: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0512 16:21:28.948377 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 12 16:21:28.948: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:21:28.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5558" for this suite. • [SLOW TEST:42.303 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":30,"skipped":630,"failed":0} SSSSSSSSSSSSS
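The orphan test hinges on delete propagation: with the Orphan policy, the ReplicationController is removed but its pods keep running and merely lose their ownerReference; the 30-second wait above exists to catch the garbage collector wrongly cascading anyway. With the v1.17-era kubectl this run uses, --cascade=false requests that orphaning behavior (newer clients spell it --cascade=orphan); the RC name and label below are illustrative:

# Delete only the RC; --cascade=false maps to propagationPolicy=Orphan
kubectl delete rc simpletest-rc --cascade=false
# The pods survive and are now unowned:
kubectl get pods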
------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:21:28.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1357 STEP: creating a pod May 12 16:21:31.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-659 -- logs-generator --log-lines-total 100 --run-duration 20s' May 12 16:21:32.395: INFO: stderr: "" May 12 16:21:32.395: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Waiting for log generator to start. May 12 16:21:32.395: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] May 12 16:21:32.395: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-659" to be "running and ready, or succeeded" May 12 16:21:32.451: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 55.443233ms May 12 16:21:35.330: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.93520705s May 12 16:21:37.432: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 5.037387631s May 12 16:21:39.723: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 7.327556834s May 12 16:21:42.058: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 9.662844894s May 12 16:21:42.058: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" May 12 16:21:42.058: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] STEP: checking for matching strings May 12 16:21:42.058: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-659' May 12 16:21:42.588: INFO: stderr: "" May 12 16:21:42.588: INFO: stdout: "I0512 16:21:39.814358 1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/d94 439\nI0512 16:21:40.014625 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/7kks 540\nI0512 16:21:40.214538 1 logs_generator.go:76] 2 GET /api/v1/namespaces/kube-system/pods/lhkb 244\nI0512 16:21:40.414531 1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/n4n2 556\nI0512 16:21:40.614502 1 logs_generator.go:76] 4 GET /api/v1/namespaces/ns/pods/nz4n 561\nI0512 16:21:40.814520 1 logs_generator.go:76] 5 POST /api/v1/namespaces/default/pods/f4cg 414\nI0512 16:21:41.014523 1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/rct 290\nI0512 16:21:41.214526 1 logs_generator.go:76] 7 POST /api/v1/namespaces/ns/pods/gx4 369\nI0512 16:21:41.414544 1 logs_generator.go:76] 8 GET /api/v1/namespaces/kube-system/pods/pqwp 545\nI0512 16:21:41.614549 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/hfn 551\nI0512 16:21:41.814560 1 logs_generator.go:76] 10 POST /api/v1/namespaces/default/pods/7nx 319\nI0512 16:21:42.014593 1 logs_generator.go:76] 11 POST /api/v1/namespaces/ns/pods/h9b 319\nI0512 16:21:42.214519 1 logs_generator.go:76] 12 POST /api/v1/namespaces/ns/pods/w2nk 322\nI0512 16:21:42.414576 1 logs_generator.go:76] 13 GET /api/v1/namespaces/default/pods/xcv 460\n" STEP: limiting log lines May 12 16:21:42.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-659 --tail=1' May 12 16:21:43.240: INFO: stderr: "" May 12 16:21:43.240: INFO: stdout: "I0512 16:21:43.214514 1 logs_generator.go:76] 17 POST /api/v1/namespaces/kube-system/pods/bdj 587\n" May 12 16:21:43.240: INFO: got output "I0512 16:21:43.214514 1 logs_generator.go:76] 17 POST /api/v1/namespaces/kube-system/pods/bdj 587\n" STEP: limiting log bytes May 12 16:21:43.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-659 --limit-bytes=1' May 12 16:21:43.478: INFO: stderr: "" May 12 16:21:43.478: INFO: stdout: "I" May 12 16:21:43.478: INFO: got output "I" STEP: exposing timestamps May 12 16:21:43.478: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-659 --tail=1 --timestamps' May 12 16:21:43.914: INFO: stderr: "" May 12 16:21:43.914: INFO: stdout: "2020-05-12T16:21:43.614663623Z I0512 16:21:43.614506 1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/mnj 208\n2020-05-12T16:21:43.814759192Z I0512 16:21:43.814571 1 logs_generator.go:76] 20 GET
/api/v1/namespaces/default/pods/jmw 579\n" May 12 16:21:43.914: INFO: got output "2020-05-12T16:21:43.614663623Z I0512 16:21:43.614506 1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/mnj 208\n2020-05-12T16:21:43.814759192Z I0512 16:21:43.814571 1 logs_generator.go:76] 20 GET /api/v1/namespaces/default/pods/jmw 579\n" May 12 16:21:43.914: FAIL: Expected : 2 to equal : 1 [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363 May 12 16:21:43.915: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-659' May 12 16:21:59.547: INFO: stderr: "" May 12 16:21:59.547: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 STEP: Collecting events from namespace "kubectl-659". STEP: Found 5 events. May 12 16:22:00.202: INFO: At 2020-05-12 16:21:32 +0000 UTC - event for logs-generator: {default-scheduler } Scheduled: Successfully assigned kubectl-659/logs-generator to jerma-worker2 May 12 16:22:00.202: INFO: At 2020-05-12 16:21:34 +0000 UTC - event for logs-generator: {kubelet jerma-worker2} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine May 12 16:22:00.202: INFO: At 2020-05-12 16:21:39 +0000 UTC - event for logs-generator: {kubelet jerma-worker2} Created: Created container logs-generator May 12 16:22:00.202: INFO: At 2020-05-12 16:21:40 +0000 UTC - event for logs-generator: {kubelet jerma-worker2} Started: Started container logs-generator May 12 16:22:00.202: INFO: At 2020-05-12 16:21:44 +0000 UTC - event for logs-generator: {kubelet jerma-worker2} Killing: Stopping container logs-generator May 12 16:22:00.204: INFO: POD NODE PHASE GRACE CONDITIONS May 12 16:22:00.204: INFO: May 12 16:22:00.207: INFO: Logging node info for node jerma-control-plane May 12 16:22:00.235: INFO: Node Info: &Node{ObjectMeta:{jerma-control-plane /api/v1/nodes/jerma-control-plane a3f47ead-f913-4a01-918b-faa66ed74dd8 15608468 0 2020-03-15 18:25:55 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-control-plane kubernetes.io/os:linux node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-05-12 16:19:36 +0000 UTC,LastTransitionTime:2020-03-15 18:25:55 +0000 
UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-05-12 16:19:36 +0000 UTC,LastTransitionTime:2020-03-15 18:25:55 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-05-12 16:19:36 +0000 UTC,LastTransitionTime:2020-03-15 18:25:55 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-05-12 16:19:36 +0000 UTC,LastTransitionTime:2020-03-15 18:26:27 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.9,},NodeAddress{Type:Hostname,Address:jerma-control-plane,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3bcfb16fe77247d3af07bed975350d5c,SystemUUID:947a2db5-5527-4203-8af5-13d97ffe8a80,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.2-31-gaa877d78,KubeletVersion:v1.17.2,KubeProxyVersion:v1.17.2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd:3.4.3-0],SizeBytes:289997247,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.17.2],SizeBytes:144352049,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.17.2],SizeBytes:132096126,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.17.2],SizeBytes:131180355,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.17.2],SizeBytes:111937841,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.5],SizeBytes:41705951,},ContainerImage{Names:[k8s.gcr.io/pause:3.1],SizeBytes:746479,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 12 16:22:00.236: INFO: Logging kubelet events for node jerma-control-plane May 12 16:22:00.238: INFO: Logging pods the kubelet thinks is on node jerma-control-plane May 12 16:22:00.371: INFO: kube-apiserver-jerma-control-plane started at 2020-03-15 18:25:57 +0000 UTC (0+1 container statuses recorded) May 12 16:22:00.371: INFO: Container kube-apiserver ready: true, restart count 0 May 12 16:22:00.371: INFO: kube-controller-manager-jerma-control-plane started at 2020-03-15 18:25:57 +0000 UTC (0+1 container statuses recorded) May 12 16:22:00.371: INFO: Container kube-controller-manager ready: true, restart count 0 May 12 16:22:00.371: INFO: local-path-provisioner-85445b74d4-7mg5w started at 2020-03-15 18:26:27 +0000 UTC (0+1 container statuses recorded) May 12 16:22:00.371: INFO: Container local-path-provisioner ready: true, restart count 0 May 12 16:22:00.371: INFO: kindnet-bjddj started at 2020-03-15 18:26:13 +0000 UTC (0+1 container statuses recorded) May 12 16:22:00.371: INFO: Container kindnet-cni ready: true, restart count 0 May 12 16:22:00.371: INFO: coredns-6955765f44-svxk5 started at 2020-03-15 18:26:28 +0000 UTC (0+1 container statuses recorded) May 12 16:22:00.371: INFO: Container coredns ready: true, restart count 0 May 12 16:22:00.371: INFO: coredns-6955765f44-rll5s started at 2020-03-15 18:26:28 +0000 UTC (0+1 container statuses recorded) May 12 16:22:00.371: INFO: 
Container coredns ready: true, restart count 0 May 12 16:22:00.371: INFO: etcd-jerma-control-plane started at 2020-03-15 18:25:57 +0000 UTC (0+1 container statuses recorded) May 12 16:22:00.371: INFO: Container etcd ready: true, restart count 0 May 12 16:22:00.371: INFO: kube-scheduler-jerma-control-plane started at 2020-03-15 18:25:57 +0000 UTC (0+1 container statuses recorded) May 12 16:22:00.371: INFO: Container kube-scheduler ready: true, restart count 0 May 12 16:22:00.371: INFO: kube-proxy-mm9zd started at 2020-03-15 18:26:13 +0000 UTC (0+1 container statuses recorded) May 12 16:22:00.371: INFO: Container kube-proxy ready: true, restart count 0 W0512 16:22:00.391776 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 12 16:22:00.507: INFO: Latency metrics for node jerma-control-plane May 12 16:22:00.507: INFO: Logging node info for node jerma-worker May 12 16:22:00.609: INFO: Node Info: &Node{ObjectMeta:{jerma-worker /api/v1/nodes/jerma-worker d3be6d4b-da1a-4024-b031-0d2aac4bfa20 15608685 0 2020-03-15 18:26:33 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-worker kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-05-12 16:20:11 +0000 UTC,LastTransitionTime:2020-03-15 18:26:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-05-12 16:20:11 +0000 UTC,LastTransitionTime:2020-03-15 18:26:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-05-12 16:20:11 +0000 UTC,LastTransitionTime:2020-03-15 18:26:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-05-12 16:20:11 +0000 UTC,LastTransitionTime:2020-03-15 18:27:43 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.10,},NodeAddress{Type:Hostname,Address:jerma-worker,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a1961fc66ec8469d814538695177d17d,SystemUUID:0df80521-e1b3-45a7-be2b-b3bd800b8699,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 
19.10,ContainerRuntimeVersion:containerd://1.3.2-31-gaa877d78,KubeletVersion:v1.17.2,KubeProxyVersion:v1.17.2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:141a336f17eaf068dbe8da4b01a832033aed5c09e7fa6349ec091ee30b76c9b1 docker.io/ollivier/clearwater-homestead-prov:latest],SizeBytes:360403156,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:8c84761d2d906e344bc6a85a11451d35696cf684305555611df16ce2615ac816 docker.io/ollivier/clearwater-ellis:latest],SizeBytes:351094667,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:f365f3b72267bef0fd696e4a93c0f3c19fb65ad42a8850fe22873dbadd03fdba docker.io/ollivier/clearwater-astaire:latest],SizeBytes:326777758,},ContainerImage{Names:[docker.io/ollivier/clearwater-sprout@sha256:44590682de48854faeccc1f4c7de39cb666014a0c4e3abd93adcccad3208a6e2 docker.io/ollivier/clearwater-sprout:latest],SizeBytes:298307172,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:0b3c89ab451b09e347657d5f85ed99d47ec3e8689b98916af72b23576926b08d docker.io/ollivier/clearwater-homestead:latest],SizeBytes:294847386,},ContainerImage{Names:[k8s.gcr.io/etcd:3.4.3-0],SizeBytes:289997247,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:8ddcfa68c82ebf0b4ce6add019a8f57c024aec453f47a37017cf7dff8680268a docker.io/ollivier/clearwater-chronos:latest],SizeBytes:285184449,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.17.2],SizeBytes:144352049,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.17.2],SizeBytes:132096126,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.17.2],SizeBytes:131180355,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.17.2],SizeBytes:111937841,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[docker.io/ollivier/clearwater-live-test@sha256:c2efaddff058c146b93517d06a3a8066b6e88fecdd98fa6847cb69db22555f04 docker.io/ollivier/clearwater-live-test:latest],SizeBytes:46948523,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12],SizeBytes:45599269,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.5],SizeBytes:41705951,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:17444032,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:5e21ed2c67f8015ed449f4402c942d8200a0b59cc0b518744e2e45a3de219ba9 
docker.io/aquasec/kube-bench:latest],SizeBytes:8028777,},ContainerImage{Names:[quay.io/coreos/etcd:v2.2.5],SizeBytes:7670543,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:4331310,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 docker.io/appropriate/curl:latest],SizeBytes:2779755,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:1799936,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:1791163,},ContainerImage{Names:[docker.io/library/busybox@sha256:89b54451a47954c0422d873d438509dae87d478f1cb5d67fb130072f67ca5d25 docker.io/library/busybox@sha256:a8cf7ff6367c2afa2a90acd081b484cbded349a7076e7bdf37a05279f276bc12 docker.io/library/busybox:latest],SizeBytes:764739,},ContainerImage{Names:[docker.io/library/busybox@sha256:b26cd013274a657b86e706210ddd5cc1f82f50155791199d29b9e86e935ce135],SizeBytes:764687,},ContainerImage{Names:[k8s.gcr.io/pause:3.1],SizeBytes:746479,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:599341,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:539309,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f k8s.gcr.io/pause:3.2],SizeBytes:299513,},ContainerImage{Names:[docker.io/kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 docker.io/kubernetes/pause:latest],SizeBytes:74015,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 12 16:22:00.610: INFO: Logging kubelet events for node jerma-worker May 12 16:22:00.612: INFO: Logging pods the kubelet thinks is on node jerma-worker May 12 16:22:00.615: INFO: kindnet-c5svj started at 2020-03-15 18:26:33 +0000 UTC (0+1 container statuses recorded) May 12 16:22:00.615: INFO: Container kindnet-cni ready: true, restart count 0 May 12 16:22:00.615: INFO: 
kube-proxy-44mlz started at 2020-03-15 18:26:33 +0000 UTC (0+1 container statuses recorded) May 12 16:22:00.615: INFO: Container kube-proxy ready: true, restart count 0 W0512 16:22:00.617894 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 12 16:22:00.683: INFO: Latency metrics for node jerma-worker May 12 16:22:00.684: INFO: Logging node info for node jerma-worker2 May 12 16:22:00.687: INFO: Node Info: &Node{ObjectMeta:{jerma-worker2 /api/v1/nodes/jerma-worker2 9b2e5b39-8dbb-4119-80fd-75a84fb601d7 15608269 0 2020-03-15 18:26:33 +0000 UTC map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-worker2 kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/run/containerd/containerd.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-05-12 16:18:50 +0000 UTC,LastTransitionTime:2020-03-15 18:26:33 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-05-12 16:18:50 +0000 UTC,LastTransitionTime:2020-03-15 18:26:33 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-05-12 16:18:50 +0000 UTC,LastTransitionTime:2020-03-15 18:26:33 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-05-12 16:18:50 +0000 UTC,LastTransitionTime:2020-03-15 18:27:14 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:172.17.0.8,},NodeAddress{Type:Hostname,Address:jerma-worker2,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f27cacf2d4974d3480d11dd8736e63d5,SystemUUID:6fef03e6-b656-4894-b57f-89d5451db372,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.2-31-gaa877d78,KubeletVersion:v1.17.2,KubeProxyVersion:v1.17.2,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/ollivier/clearwater-cassandra@sha256:07e93f55decdc1224fb8d161edb5617d58e3488c1250168337548ccc3e82f6b7 docker.io/ollivier/clearwater-cassandra:latest],SizeBytes:386164043,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead-prov@sha256:141a336f17eaf068dbe8da4b01a832033aed5c09e7fa6349ec091ee30b76c9b1 
docker.io/ollivier/clearwater-homestead-prov:latest],SizeBytes:360403156,},ContainerImage{Names:[docker.io/ollivier/clearwater-ellis@sha256:8c84761d2d906e344bc6a85a11451d35696cf684305555611df16ce2615ac816 docker.io/ollivier/clearwater-ellis:latest],SizeBytes:351094667,},ContainerImage{Names:[docker.io/ollivier/clearwater-homer@sha256:19c6d11d2678c44822f07c01c574fed426e3c99003b6af0410f0911d57939d5a docker.io/ollivier/clearwater-homer:latest],SizeBytes:343984685,},ContainerImage{Names:[docker.io/ollivier/clearwater-astaire@sha256:f365f3b72267bef0fd696e4a93c0f3c19fb65ad42a8850fe22873dbadd03fdba docker.io/ollivier/clearwater-astaire:latest],SizeBytes:326777758,},ContainerImage{Names:[docker.io/ollivier/clearwater-bono@sha256:eb98596100b1553c9814b6185863ec53e743eb0370faeeafe16fc1dfe8d02ec3 docker.io/ollivier/clearwater-bono:latest],SizeBytes:303283801,},ContainerImage{Names:[docker.io/ollivier/clearwater-homestead@sha256:0b3c89ab451b09e347657d5f85ed99d47ec3e8689b98916af72b23576926b08d docker.io/ollivier/clearwater-homestead:latest],SizeBytes:294847386,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3-0 k8s.gcr.io/etcd:3.4.3],SizeBytes:289997247,},ContainerImage{Names:[docker.io/ollivier/clearwater-ralf@sha256:20069a8d9f366dd0f003afa7c4fbcbcd5e9d2b99abae83540c6538fc7cff6b97 docker.io/ollivier/clearwater-ralf:latest],SizeBytes:287124270,},ContainerImage{Names:[docker.io/ollivier/clearwater-chronos@sha256:8ddcfa68c82ebf0b4ce6add019a8f57c024aec453f47a37017cf7dff8680268a docker.io/ollivier/clearwater-chronos:latest],SizeBytes:285184449,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver:v1.17.2],SizeBytes:144352049,},ContainerImage{Names:[k8s.gcr.io/kube-proxy:v1.17.2],SizeBytes:132096126,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager:v1.17.2],SizeBytes:131180355,},ContainerImage{Names:[docker.io/aquasec/kube-hunter@sha256:563f44851d413c7199a0a8a2a13df1e98bee48229e19f4917e6da68e5482df6e docker.io/aquasec/kube-hunter:latest],SizeBytes:123995068,},ContainerImage{Names:[docker.io/kindest/kindnetd:0.5.4],SizeBytes:113207016,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler:v1.17.2],SizeBytes:111937841,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:85425365,},ContainerImage{Names:[k8s.gcr.io/debian-base:v2.0.0],SizeBytes:53884301,},ContainerImage{Names:[us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12],SizeBytes:45599269,},ContainerImage{Names:[docker.io/rancher/local-path-provisioner:v0.0.12],SizeBytes:41994847,},ContainerImage{Names:[docker.io/library/httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a docker.io/library/httpd:2.4.39-alpine],SizeBytes:41901429,},ContainerImage{Names:[k8s.gcr.io/coredns:1.6.5],SizeBytes:41705951,},ContainerImage{Names:[docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40765017,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 
gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:25311280,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:17444032,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10],SizeBytes:16222606,},ContainerImage{Names:[docker.io/aquasec/kube-bench@sha256:5e21ed2c67f8015ed449f4402c942d8200a0b59cc0b518744e2e45a3de219ba9 docker.io/aquasec/kube-bench:latest],SizeBytes:8028777,},ContainerImage{Names:[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine],SizeBytes:6978806,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:4381769,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:4331310,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 docker.io/appropriate/curl:latest],SizeBytes:2779755,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:1804628,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:1799936,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:1791163,},ContainerImage{Names:[docker.io/library/busybox@sha256:89b54451a47954c0422d873d438509dae87d478f1cb5d67fb130072f67ca5d25 docker.io/library/busybox@sha256:a8cf7ff6367c2afa2a90acd081b484cbded349a7076e7bdf37a05279f276bc12 docker.io/library/busybox:latest],SizeBytes:764739,},ContainerImage{Names:[docker.io/library/busybox@sha256:b26cd013274a657b86e706210ddd5cc1f82f50155791199d29b9e86e935ce135],SizeBytes:764687,},ContainerImage{Names:[k8s.gcr.io/pause:3.1],SizeBytes:746479,},ContainerImage{Names:[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29],SizeBytes:732685,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:599341,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:539309,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:927d98197ec1141a368550822d18fa1c60bdae27b78b0c004f705f548c07814f 
k8s.gcr.io/pause:3.2],SizeBytes:299513,},ContainerImage{Names:[docker.io/kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 docker.io/kubernetes/pause:latest],SizeBytes:74015,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} May 12 16:22:00.687: INFO: Logging kubelet events for node jerma-worker2 May 12 16:22:00.689: INFO: Logging pods the kubelet thinks is on node jerma-worker2 May 12 16:22:00.692: INFO: kindnet-zk6sq started at 2020-03-15 18:26:33 +0000 UTC (0+1 container statuses recorded) May 12 16:22:00.692: INFO: Container kindnet-cni ready: true, restart count 0 May 12 16:22:00.693: INFO: kube-bench-hk6h6 started at 2020-03-26 15:21:52 +0000 UTC (0+1 container statuses recorded) May 12 16:22:00.693: INFO: Container kube-bench ready: false, restart count 0 May 12 16:22:00.693: INFO: kube-proxy-75q42 started at 2020-03-15 18:26:33 +0000 UTC (0+1 container statuses recorded) May 12 16:22:00.693: INFO: Container kube-proxy ready: true, restart count 0 May 12 16:22:00.693: INFO: kube-hunter-8g6pb started at 2020-03-26 15:21:33 +0000 UTC (0+1 container statuses recorded) May 12 16:22:00.693: INFO: Container kube-hunter ready: false, restart count 0 W0512 16:22:00.695338 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 12 16:22:00.763: INFO: Latency metrics for node jerma-worker2 May 12 16:22:00.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-659" for this suite. • Failure [31.818 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1353 should be able to retrieve and filter logs [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 12 16:21:43.914: Expected : 2 to equal : 1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1410 ------------------------------ {"msg":"FAILED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":30,"skipped":643,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:22:00.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 12 16:22:01.441: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-b83e51b8-95cb-4cd2-b158-d42b8eaccaac" in namespace "projected-7791" to be "success or failure" May 12 16:22:01.580: INFO: Pod "downwardapi-volume-b83e51b8-95cb-4cd2-b158-d42b8eaccaac": Phase="Pending", Reason="", readiness=false. Elapsed: 138.052212ms May 12 16:22:03.583: INFO: Pod "downwardapi-volume-b83e51b8-95cb-4cd2-b158-d42b8eaccaac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.141557672s May 12 16:22:05.587: INFO: Pod "downwardapi-volume-b83e51b8-95cb-4cd2-b158-d42b8eaccaac": Phase="Pending", Reason="", readiness=false. Elapsed: 4.145055548s May 12 16:22:07.759: INFO: Pod "downwardapi-volume-b83e51b8-95cb-4cd2-b158-d42b8eaccaac": Phase="Pending", Reason="", readiness=false. Elapsed: 6.317138518s May 12 16:22:09.788: INFO: Pod "downwardapi-volume-b83e51b8-95cb-4cd2-b158-d42b8eaccaac": Phase="Running", Reason="", readiness=true. Elapsed: 8.346434616s May 12 16:22:11.903: INFO: Pod "downwardapi-volume-b83e51b8-95cb-4cd2-b158-d42b8eaccaac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.461460025s STEP: Saw pod success May 12 16:22:11.903: INFO: Pod "downwardapi-volume-b83e51b8-95cb-4cd2-b158-d42b8eaccaac" satisfied condition "success or failure" May 12 16:22:11.905: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-b83e51b8-95cb-4cd2-b158-d42b8eaccaac container client-container: STEP: delete the pod May 12 16:22:12.127: INFO: Waiting for pod downwardapi-volume-b83e51b8-95cb-4cd2-b158-d42b8eaccaac to disappear May 12 16:22:12.133: INFO: Pod downwardapi-volume-b83e51b8-95cb-4cd2-b158-d42b8eaccaac no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:22:12.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7791" for this suite. 
• [SLOW TEST:11.366 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":31,"skipped":657,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:22:12.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 12 16:22:13.143: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 12 16:22:15.216: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724897333, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724897333, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724897333, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724897333, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 16:22:17.363: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724897333, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724897333, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724897333, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724897333, loc:(*time.Location)(0x78ee0c0)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 16:22:19.628: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724897333, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724897333, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724897333, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724897333, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 12 16:22:22.839: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:22:24.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1673" for this suite. STEP: Destroying namespace "webhook-1673-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.327 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":32,"skipped":657,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:22:24.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 12 16:22:24.579: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:22:34.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3490" for this suite. 
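The Pods spec above creates a pod that writes to stdout and then reads that output back over a websocket rather than a plain HTTP GET. A pod of that shape (name and command are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-logs-websocket-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo container is alive; sleep 600"]
EOF
# Same log subresource the test hits, minus the websocket upgrade:
kubectl get --raw "/api/v1/namespaces/default/pods/pod-logs-websocket-example/log?container=main"

kubectl logs reads from this endpoint as well; the conformance test differs only in negotiating a websocket connection against it.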
• [SLOW TEST:10.165 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":33,"skipped":657,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:22:34.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 12 16:22:34.778: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c9322b29-7942-4f9b-bb2e-a19a61cca835" in namespace "downward-api-6426" to be "success or failure" May 12 16:22:34.854: INFO: Pod "downwardapi-volume-c9322b29-7942-4f9b-bb2e-a19a61cca835": Phase="Pending", Reason="", readiness=false. Elapsed: 76.071801ms May 12 16:22:36.858: INFO: Pod "downwardapi-volume-c9322b29-7942-4f9b-bb2e-a19a61cca835": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079906084s May 12 16:22:38.950: INFO: Pod "downwardapi-volume-c9322b29-7942-4f9b-bb2e-a19a61cca835": Phase="Pending", Reason="", readiness=false. Elapsed: 4.171991997s May 12 16:22:41.353: INFO: Pod "downwardapi-volume-c9322b29-7942-4f9b-bb2e-a19a61cca835": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.575151042s STEP: Saw pod success May 12 16:22:41.353: INFO: Pod "downwardapi-volume-c9322b29-7942-4f9b-bb2e-a19a61cca835" satisfied condition "success or failure" May 12 16:22:41.355: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-c9322b29-7942-4f9b-bb2e-a19a61cca835 container client-container: STEP: delete the pod May 12 16:22:41.747: INFO: Waiting for pod downwardapi-volume-c9322b29-7942-4f9b-bb2e-a19a61cca835 to disappear May 12 16:22:41.752: INFO: Pod downwardapi-volume-c9322b29-7942-4f9b-bb2e-a19a61cca835 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:22:41.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6426" for this suite. 
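"DefaultMode on files" refers to the downwardAPI volume's defaultMode field, which sets the permission bits on every file the volume projects. A minimal sketch with illustrative names (the conformance tests use a restrictive mode such as 0400):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-defaultmode-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "stat -L -c %a /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400        # applied to every projected file
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF

kubectl logs downwardapi-defaultmode-example should then print 400 (stat -L follows the volume's internal symlink to the real file), matching the mode the spec checks for.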
• [SLOW TEST:7.132 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":34,"skipped":671,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:22:41.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-6380 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-6380 I0512 16:22:42.383554 7 runners.go:189] Created replication controller with name: externalname-service, namespace: services-6380, replica count: 2 I0512 16:22:45.434038 7 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 16:22:48.434235 7 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 16:22:51.434421 7 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 12 16:22:51.434: INFO: Creating new exec pod May 12 16:22:58.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6380 execpodnz6cm -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 12 16:22:58.839: INFO: stderr: "I0512 16:22:58.765043 908 log.go:172] (0xc000b30000) (0xc00062e780) Create stream\nI0512 16:22:58.765098 908 log.go:172] (0xc000b30000) (0xc00062e780) Stream added, broadcasting: 1\nI0512 16:22:58.767920 908 log.go:172] (0xc000b30000) Reply frame received for 1\nI0512 16:22:58.767987 908 log.go:172] (0xc000b30000) (0xc0003c1540) Create stream\nI0512 16:22:58.768014 908 log.go:172] (0xc000b30000) (0xc0003c1540) Stream added, broadcasting: 3\nI0512 16:22:58.768736 908 log.go:172] (0xc000b30000) Reply frame received for 3\nI0512 16:22:58.768766 908 log.go:172] (0xc000b30000) (0xc000a28000) Create stream\nI0512 16:22:58.768777 908 log.go:172] (0xc000b30000) (0xc000a28000) Stream added, broadcasting: 5\nI0512 16:22:58.769693 908 log.go:172] (0xc000b30000) Reply frame 
received for 5\nI0512 16:22:58.831836 908 log.go:172] (0xc000b30000) Data frame received for 5\nI0512 16:22:58.831873 908 log.go:172] (0xc000a28000) (5) Data frame handling\nI0512 16:22:58.831900 908 log.go:172] (0xc000a28000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0512 16:22:58.832683 908 log.go:172] (0xc000b30000) Data frame received for 5\nI0512 16:22:58.832698 908 log.go:172] (0xc000a28000) (5) Data frame handling\nI0512 16:22:58.832704 908 log.go:172] (0xc000a28000) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0512 16:22:58.833077 908 log.go:172] (0xc000b30000) Data frame received for 5\nI0512 16:22:58.833092 908 log.go:172] (0xc000a28000) (5) Data frame handling\nI0512 16:22:58.833291 908 log.go:172] (0xc000b30000) Data frame received for 3\nI0512 16:22:58.833342 908 log.go:172] (0xc0003c1540) (3) Data frame handling\nI0512 16:22:58.835178 908 log.go:172] (0xc000b30000) Data frame received for 1\nI0512 16:22:58.835193 908 log.go:172] (0xc00062e780) (1) Data frame handling\nI0512 16:22:58.835204 908 log.go:172] (0xc00062e780) (1) Data frame sent\nI0512 16:22:58.835225 908 log.go:172] (0xc000b30000) (0xc00062e780) Stream removed, broadcasting: 1\nI0512 16:22:58.835235 908 log.go:172] (0xc000b30000) Go away received\nI0512 16:22:58.835538 908 log.go:172] (0xc000b30000) (0xc00062e780) Stream removed, broadcasting: 1\nI0512 16:22:58.835556 908 log.go:172] (0xc000b30000) (0xc0003c1540) Stream removed, broadcasting: 3\nI0512 16:22:58.835564 908 log.go:172] (0xc000b30000) (0xc000a28000) Stream removed, broadcasting: 5\n" May 12 16:22:58.839: INFO: stdout: "" May 12 16:22:58.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6380 execpodnz6cm -- /bin/sh -x -c nc -zv -t -w 2 10.96.47.52 80' May 12 16:22:59.025: INFO: stderr: "I0512 16:22:58.951862 928 log.go:172] (0xc000612dc0) (0xc0005d1f40) Create stream\nI0512 16:22:58.951911 928 log.go:172] (0xc000612dc0) (0xc0005d1f40) Stream added, broadcasting: 1\nI0512 16:22:58.954639 928 log.go:172] (0xc000612dc0) Reply frame received for 1\nI0512 16:22:58.954681 928 log.go:172] (0xc000612dc0) (0xc000522820) Create stream\nI0512 16:22:58.954696 928 log.go:172] (0xc000612dc0) (0xc000522820) Stream added, broadcasting: 3\nI0512 16:22:58.955482 928 log.go:172] (0xc000612dc0) Reply frame received for 3\nI0512 16:22:58.955514 928 log.go:172] (0xc000612dc0) (0xc00021b5e0) Create stream\nI0512 16:22:58.955527 928 log.go:172] (0xc000612dc0) (0xc00021b5e0) Stream added, broadcasting: 5\nI0512 16:22:58.956453 928 log.go:172] (0xc000612dc0) Reply frame received for 5\nI0512 16:22:59.020875 928 log.go:172] (0xc000612dc0) Data frame received for 3\nI0512 16:22:59.020892 928 log.go:172] (0xc000522820) (3) Data frame handling\nI0512 16:22:59.020940 928 log.go:172] (0xc000612dc0) Data frame received for 5\nI0512 16:22:59.020962 928 log.go:172] (0xc00021b5e0) (5) Data frame handling\nI0512 16:22:59.021000 928 log.go:172] (0xc00021b5e0) (5) Data frame sent\nI0512 16:22:59.021019 928 log.go:172] (0xc000612dc0) Data frame received for 5\nI0512 16:22:59.021032 928 log.go:172] (0xc00021b5e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.47.52 80\nConnection to 10.96.47.52 80 port [tcp/http] succeeded!\nI0512 16:22:59.022020 928 log.go:172] (0xc000612dc0) Data frame received for 1\nI0512 16:22:59.022039 928 log.go:172] (0xc0005d1f40) (1) Data frame handling\nI0512 16:22:59.022051 928 log.go:172] (0xc0005d1f40) (1) Data frame sent\nI0512 16:22:59.022060 928 
log.go:172] (0xc000612dc0) (0xc0005d1f40) Stream removed, broadcasting: 1\nI0512 16:22:59.022134 928 log.go:172] (0xc000612dc0) Go away received\nI0512 16:22:59.022265 928 log.go:172] (0xc000612dc0) (0xc0005d1f40) Stream removed, broadcasting: 1\nI0512 16:22:59.022273 928 log.go:172] (0xc000612dc0) (0xc000522820) Stream removed, broadcasting: 3\nI0512 16:22:59.022278 928 log.go:172] (0xc000612dc0) (0xc00021b5e0) Stream removed, broadcasting: 5\n" May 12 16:22:59.025: INFO: stdout: "" May 12 16:22:59.025: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:22:59.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6380" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:18.164 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":35,"skipped":681,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:22:59.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service endpoint-test2 in namespace services-8393 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8393 to expose endpoints map[] May 12 16:23:00.178: INFO: Get endpoints failed (5.989855ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 12 16:23:01.181: INFO: successfully validated that service endpoint-test2 in namespace services-8393 exposes endpoints map[] (1.00910223s elapsed) STEP: Creating pod pod1 in namespace services-8393 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8393 to expose endpoints map[pod1:[80]] May 12 16:23:05.532: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.345751667s elapsed, will retry) May 12 16:23:08.100: INFO: successfully validated that service endpoint-test2 in namespace services-8393 exposes endpoints map[pod1:[80]] (6.913862314s elapsed) STEP: Creating pod pod2 in namespace services-8393 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8393 to expose endpoints map[pod1:[80] pod2:[80]] May 12 
16:23:13.716: INFO: Unexpected endpoints: found map[a7c5d2da-bfcd-4f37-a18e-ac99bad199a6:[80]], expected map[pod1:[80] pod2:[80]] (5.613248296s elapsed, will retry) May 12 16:23:15.832: INFO: successfully validated that service endpoint-test2 in namespace services-8393 exposes endpoints map[pod1:[80] pod2:[80]] (7.72899863s elapsed) STEP: Deleting pod pod1 in namespace services-8393 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8393 to expose endpoints map[pod2:[80]] May 12 16:23:17.410: INFO: successfully validated that service endpoint-test2 in namespace services-8393 exposes endpoints map[pod2:[80]] (1.566568442s elapsed) STEP: Deleting pod pod2 in namespace services-8393 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8393 to expose endpoints map[] May 12 16:23:17.899: INFO: successfully validated that service endpoint-test2 in namespace services-8393 exposes endpoints map[] (484.974274ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:23:18.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8393" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:18.344 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":36,"skipped":686,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:23:18.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 12 16:23:19.187: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:23:33.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6791" for this suite. 
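"Invoke init containers on a RestartAlways pod" comes down to a pod like the following: each init container must run to completion, in declaration order, before the main container starts. Names and images are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-containers-example   # illustrative name
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["true"]
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["true"]
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.1
EOF

Roughly speaking, the spec watches status.initContainerStatuses until both init containers have terminated with exit code 0 and the main container is Running, which is why it sits between the pod creation at 16:23:19 and the teardown at 16:23:33 above.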
• [SLOW TEST:15.598 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":37,"skipped":694,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:23:33.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-djf6 STEP: Creating a pod to test atomic-volume-subpath May 12 16:23:34.313: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-djf6" in namespace "subpath-2740" to be "success or failure" May 12 16:23:34.319: INFO: Pod "pod-subpath-test-configmap-djf6": Phase="Pending", Reason="", readiness=false. Elapsed: 5.395802ms May 12 16:23:36.323: INFO: Pod "pod-subpath-test-configmap-djf6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009379636s May 12 16:23:38.342: INFO: Pod "pod-subpath-test-configmap-djf6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0286658s May 12 16:23:40.526: INFO: Pod "pod-subpath-test-configmap-djf6": Phase="Running", Reason="", readiness=true. Elapsed: 6.212790282s May 12 16:23:42.642: INFO: Pod "pod-subpath-test-configmap-djf6": Phase="Running", Reason="", readiness=true. Elapsed: 8.328463356s May 12 16:23:44.645: INFO: Pod "pod-subpath-test-configmap-djf6": Phase="Running", Reason="", readiness=true. Elapsed: 10.332143006s May 12 16:23:46.648: INFO: Pod "pod-subpath-test-configmap-djf6": Phase="Running", Reason="", readiness=true. Elapsed: 12.335114824s May 12 16:23:48.713: INFO: Pod "pod-subpath-test-configmap-djf6": Phase="Running", Reason="", readiness=true. Elapsed: 14.400053555s May 12 16:23:50.716: INFO: Pod "pod-subpath-test-configmap-djf6": Phase="Running", Reason="", readiness=true. Elapsed: 16.403216909s May 12 16:23:52.767: INFO: Pod "pod-subpath-test-configmap-djf6": Phase="Running", Reason="", readiness=true. Elapsed: 18.453394122s May 12 16:23:54.770: INFO: Pod "pod-subpath-test-configmap-djf6": Phase="Running", Reason="", readiness=true. Elapsed: 20.456864376s May 12 16:23:56.773: INFO: Pod "pod-subpath-test-configmap-djf6": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.460053336s May 12 16:23:58.833: INFO: Pod "pod-subpath-test-configmap-djf6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.519870512s STEP: Saw pod success May 12 16:23:58.833: INFO: Pod "pod-subpath-test-configmap-djf6" satisfied condition "success or failure" May 12 16:23:58.836: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-djf6 container test-container-subpath-configmap-djf6: STEP: delete the pod May 12 16:23:58.865: INFO: Waiting for pod pod-subpath-test-configmap-djf6 to disappear May 12 16:23:59.348: INFO: Pod pod-subpath-test-configmap-djf6 no longer exists STEP: Deleting pod pod-subpath-test-configmap-djf6 May 12 16:23:59.348: INFO: Deleting pod "pod-subpath-test-configmap-djf6" in namespace "subpath-2740" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:23:59.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2740" for this suite. • [SLOW TEST:25.487 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":38,"skipped":703,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:23:59.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-24ef2ac0-4c5c-48e5-8cdb-529b13c15794 STEP: Creating a pod to test consume configMaps May 12 16:24:00.068: INFO: Waiting up to 5m0s for pod "pod-configmaps-9b75d8ec-3c32-41a2-9a5e-a86e85c27ca1" in namespace "configmap-3266" to be "success or failure" May 12 16:24:00.150: INFO: Pod "pod-configmaps-9b75d8ec-3c32-41a2-9a5e-a86e85c27ca1": Phase="Pending", Reason="", readiness=false. Elapsed: 82.577145ms May 12 16:24:02.168: INFO: Pod "pod-configmaps-9b75d8ec-3c32-41a2-9a5e-a86e85c27ca1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100566524s May 12 16:24:04.180: INFO: Pod "pod-configmaps-9b75d8ec-3c32-41a2-9a5e-a86e85c27ca1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.11207113s May 12 16:24:06.303: INFO: Pod "pod-configmaps-9b75d8ec-3c32-41a2-9a5e-a86e85c27ca1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.235388728s STEP: Saw pod success May 12 16:24:06.303: INFO: Pod "pod-configmaps-9b75d8ec-3c32-41a2-9a5e-a86e85c27ca1" satisfied condition "success or failure" May 12 16:24:06.307: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-9b75d8ec-3c32-41a2-9a5e-a86e85c27ca1 container configmap-volume-test: STEP: delete the pod May 12 16:24:06.336: INFO: Waiting for pod pod-configmaps-9b75d8ec-3c32-41a2-9a5e-a86e85c27ca1 to disappear May 12 16:24:06.474: INFO: Pod pod-configmaps-9b75d8ec-3c32-41a2-9a5e-a86e85c27ca1 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:24:06.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3266" for this suite. • [SLOW TEST:7.327 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":39,"skipped":710,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:24:06.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 12 16:24:06.916: INFO: (0) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/ pods/ (200; 15.834276ms) May 12 16:24:06.924: INFO: (1) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 7.966885ms) May 12 16:24:06.928: INFO: (2) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.568821ms) May 12 16:24:06.931: INFO: (3) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.307937ms) May 12 16:24:06.934: INFO: (4) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.863265ms) May 12 16:24:06.936: INFO: (5) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.13976ms) May 12 16:24:06.939: INFO: (6) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.712155ms) May 12 16:24:06.942: INFO: (7) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.937291ms) May 12 16:24:06.971: INFO: (8) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 29.455901ms) May 12 16:24:06.976: INFO: (9) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 4.692713ms) May 12 16:24:06.989: INFO: (10) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 12.949574ms) May 12 16:24:06.992: INFO: (11) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.928736ms) May 12 16:24:07.080: INFO: (12) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 87.507806ms) May 12 16:24:07.083: INFO: (13) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.830931ms) May 12 16:24:07.087: INFO: (14) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.243021ms) May 12 16:24:07.090: INFO: (15) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.648589ms) May 12 16:24:07.094: INFO: (16) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.206873ms) May 12 16:24:07.096: INFO: (17) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.711368ms) May 12 16:24:07.099: INFO: (18) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.988854ms) May 12 16:24:07.102: INFO: (19) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/
(200; 2.541648ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:24:07.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-2129" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":278,"completed":40,"skipped":751,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:24:07.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1275 STEP: creating the pod May 12 16:24:07.302: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3019' May 12 16:24:07.684: INFO: stderr: "" May 12 16:24:07.684: INFO: stdout: "pod/pause created\n" May 12 16:24:07.684: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 12 16:24:07.685: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-3019" to be "running and ready" May 12 16:24:07.791: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 106.456297ms May 12 16:24:10.122: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.437384109s May 12 16:24:12.216: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.531640738s May 12 16:24:12.216: INFO: Pod "pause" satisfied condition "running and ready" May 12 16:24:12.216: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod May 12 16:24:12.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-3019' May 12 16:24:12.354: INFO: stderr: "" May 12 16:24:12.354: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 12 16:24:12.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-3019' May 12 16:24:12.467: INFO: stderr: "" May 12 16:24:12.467: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod May 12 16:24:12.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-3019' May 12 16:24:12.648: INFO: stderr: "" May 12 16:24:12.648: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 12 16:24:12.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-3019' May 12 16:24:12.839: INFO: stderr: "" May 12 16:24:12.839: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1282 STEP: using delete to clean up resources May 12 16:24:12.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3019' May 12 16:24:13.002: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 12 16:24:13.002: INFO: stdout: "pod \"pause\" force deleted\n" May 12 16:24:13.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-3019' May 12 16:24:13.181: INFO: stderr: "No resources found in kubectl-3019 namespace.\n" May 12 16:24:13.181: INFO: stdout: "" May 12 16:24:13.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-3019 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 12 16:24:13.276: INFO: stderr: "" May 12 16:24:13.276: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:24:13.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3019" for this suite. 
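Stripped of the harness, the label round trip above is three plain kubectl invocations (namespace flag omitted; note the trailing dash on the last command, which is kubectl's syntax for removing a label):

kubectl label pods pause testing-label=testing-label-value
kubectl get pod pause -L testing-label
kubectl label pods pause testing-label-

The -L flag adds a TESTING-LABEL column to the get output, which is how the spec verifies that the value appears after labeling and disappears after removal.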
• [SLOW TEST:6.171 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1272 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":41,"skipped":783,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:24:13.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-86afc4b6-fdd6-40f9-9f49-cca7d4555b30 in namespace container-probe-5744 May 12 16:24:19.573: INFO: Started pod liveness-86afc4b6-fdd6-40f9-9f49-cca7d4555b30 in namespace container-probe-5744 STEP: checking the pod's current state and verifying that restartCount is present May 12 16:24:19.576: INFO: Initial restart count of pod liveness-86afc4b6-fdd6-40f9-9f49-cca7d4555b30 is 0 May 12 16:24:45.936: INFO: Restart count of pod container-probe-5744/liveness-86afc4b6-fdd6-40f9-9f49-cca7d4555b30 is now 1 (26.359826995s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:24:46.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5744" for this suite. • [SLOW TEST:33.397 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":42,"skipped":804,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:24:46.679: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:24:59.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3466" for this suite. • [SLOW TEST:12.493 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":43,"skipped":827,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:24:59.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-8197 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-8197 STEP: Deleting pre-stop pod May 12 16:25:16.555: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." 
], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:25:16.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-8197" for this suite. • [SLOW TEST:17.531 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":44,"skipped":840,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:25:16.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-655 STEP: creating a selector STEP: Creating the service pods in kubernetes May 12 16:25:17.495: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 12 16:25:46.853: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.70:8080/dial?request=hostname&protocol=http&host=10.244.1.69&port=8080&tries=1'] Namespace:pod-network-test-655 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 16:25:46.853: INFO: >>> kubeConfig: /root/.kube/config I0512 16:25:46.879238 7 log.go:172] (0xc001aefe40) (0xc0026ef680) Create stream I0512 16:25:46.879265 7 log.go:172] (0xc001aefe40) (0xc0026ef680) Stream added, broadcasting: 1 I0512 16:25:46.881063 7 log.go:172] (0xc001aefe40) Reply frame received for 1 I0512 16:25:46.881242 7 log.go:172] (0xc001aefe40) (0xc002210000) Create stream I0512 16:25:46.881266 7 log.go:172] (0xc001aefe40) (0xc002210000) Stream added, broadcasting: 3 I0512 16:25:46.881992 7 log.go:172] (0xc001aefe40) Reply frame received for 3 I0512 16:25:46.882021 7 log.go:172] (0xc001aefe40) (0xc0026ef720) Create stream I0512 16:25:46.882032 7 log.go:172] (0xc001aefe40) (0xc0026ef720) Stream added, broadcasting: 5 I0512 16:25:46.882746 7 log.go:172] (0xc001aefe40) Reply frame received for 5 I0512 16:25:46.936075 7 log.go:172] (0xc001aefe40) Data frame received for 3 I0512 16:25:46.936089 7 log.go:172] (0xc002210000) (3) Data frame handling I0512 16:25:46.936103 7 log.go:172] (0xc002210000) (3) Data frame sent I0512 16:25:46.936510 7 log.go:172] (0xc001aefe40) Data frame received for 5 I0512 16:25:46.936520 7 log.go:172] (0xc0026ef720) 
(5) Data frame handling I0512 16:25:46.936669 7 log.go:172] (0xc001aefe40) Data frame received for 3 I0512 16:25:46.936685 7 log.go:172] (0xc002210000) (3) Data frame handling I0512 16:25:46.937752 7 log.go:172] (0xc001aefe40) Data frame received for 1 I0512 16:25:46.937779 7 log.go:172] (0xc0026ef680) (1) Data frame handling I0512 16:25:46.937794 7 log.go:172] (0xc0026ef680) (1) Data frame sent I0512 16:25:46.937811 7 log.go:172] (0xc001aefe40) (0xc0026ef680) Stream removed, broadcasting: 1 I0512 16:25:46.937837 7 log.go:172] (0xc001aefe40) Go away received I0512 16:25:46.937971 7 log.go:172] (0xc001aefe40) (0xc0026ef680) Stream removed, broadcasting: 1 I0512 16:25:46.937991 7 log.go:172] (0xc001aefe40) (0xc002210000) Stream removed, broadcasting: 3 I0512 16:25:46.938001 7 log.go:172] (0xc001aefe40) (0xc0026ef720) Stream removed, broadcasting: 5 May 12 16:25:46.938: INFO: Waiting for responses: map[] May 12 16:25:46.940: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.70:8080/dial?request=hostname&protocol=http&host=10.244.2.213&port=8080&tries=1'] Namespace:pod-network-test-655 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 16:25:46.940: INFO: >>> kubeConfig: /root/.kube/config I0512 16:25:46.963503 7 log.go:172] (0xc001e282c0) (0xc002680e60) Create stream I0512 16:25:46.963524 7 log.go:172] (0xc001e282c0) (0xc002680e60) Stream added, broadcasting: 1 I0512 16:25:46.966404 7 log.go:172] (0xc001e282c0) Reply frame received for 1 I0512 16:25:46.966440 7 log.go:172] (0xc001e282c0) (0xc0026ef9a0) Create stream I0512 16:25:46.966450 7 log.go:172] (0xc001e282c0) (0xc0026ef9a0) Stream added, broadcasting: 3 I0512 16:25:46.967319 7 log.go:172] (0xc001e282c0) Reply frame received for 3 I0512 16:25:46.967346 7 log.go:172] (0xc001e282c0) (0xc002680f00) Create stream I0512 16:25:46.967358 7 log.go:172] (0xc001e282c0) (0xc002680f00) Stream added, broadcasting: 5 I0512 16:25:46.968190 7 log.go:172] (0xc001e282c0) Reply frame received for 5 I0512 16:25:47.014852 7 log.go:172] (0xc001e282c0) Data frame received for 3 I0512 16:25:47.014879 7 log.go:172] (0xc0026ef9a0) (3) Data frame handling I0512 16:25:47.014897 7 log.go:172] (0xc0026ef9a0) (3) Data frame sent I0512 16:25:47.015761 7 log.go:172] (0xc001e282c0) Data frame received for 3 I0512 16:25:47.015796 7 log.go:172] (0xc0026ef9a0) (3) Data frame handling I0512 16:25:47.015831 7 log.go:172] (0xc001e282c0) Data frame received for 5 I0512 16:25:47.015847 7 log.go:172] (0xc002680f00) (5) Data frame handling I0512 16:25:47.016920 7 log.go:172] (0xc001e282c0) Data frame received for 1 I0512 16:25:47.016944 7 log.go:172] (0xc002680e60) (1) Data frame handling I0512 16:25:47.016962 7 log.go:172] (0xc002680e60) (1) Data frame sent I0512 16:25:47.016982 7 log.go:172] (0xc001e282c0) (0xc002680e60) Stream removed, broadcasting: 1 I0512 16:25:47.017000 7 log.go:172] (0xc001e282c0) Go away received I0512 16:25:47.017256 7 log.go:172] (0xc001e282c0) (0xc002680e60) Stream removed, broadcasting: 1 I0512 16:25:47.017289 7 log.go:172] (0xc001e282c0) (0xc0026ef9a0) Stream removed, broadcasting: 3 I0512 16:25:47.017306 7 log.go:172] (0xc001e282c0) (0xc002680f00) Stream removed, broadcasting: 5 May 12 16:25:47.017: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:25:47.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
STEP: Destroying namespace "pod-network-test-655" for this suite. • [SLOW TEST:30.320 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":45,"skipped":849,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} S ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:25:47.024: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 12 16:26:11.705: INFO: Container started at 2020-05-12 16:25:53 +0000 UTC, pod became ready at 2020-05-12 16:26:10 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:26:11.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1525" for this suite. 
• [SLOW TEST:24.689 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":46,"skipped":850,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:26:11.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 12 16:26:13.046: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 12 16:26:15.055: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724897573, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724897573, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724897573, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724897573, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 16:26:17.560: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724897573, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724897573, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724897573, 
loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724897573, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 16:26:19.064: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724897573, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724897573, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724897573, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724897573, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 12 16:26:22.262: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:26:22.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5153" for this suite. STEP: Destroying namespace "webhook-5153-markers" for this suite. 
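The STEPs above walk the discovery tree from /apis down to the group-version document. Rendered as YAML, the leaf the test fetches (GET /apis/admissionregistration.k8s.io/v1) looks roughly like the sketch below; the exact verbs and fields vary by server version, so treat it as an approximation:

kind: APIResourceList
apiVersion: v1
groupVersion: admissionregistration.k8s.io/v1
resources:
- name: mutatingwebhookconfigurations
  singularName: ""
  namespaced: false                # both webhook configuration kinds are cluster-scoped
  kind: MutatingWebhookConfiguration
  verbs: [create, delete, deletecollection, get, list, patch, update, watch]
- name: validatingwebhookconfigurations
  singularName: ""
  namespaced: false
  kind: ValidatingWebhookConfiguration
  verbs: [create, delete, deletecollection, get, list, patch, update, watch]

The spec passes once both resource names appear in this list.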
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.143 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":47,"skipped":885,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:26:23.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-1179 STEP: creating a selector STEP: Creating the service pods in kubernetes May 12 16:26:25.096: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 12 16:26:55.752: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.73:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1179 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 16:26:55.752: INFO: >>> kubeConfig: /root/.kube/config I0512 16:26:55.792536 7 log.go:172] (0xc0044da6e0) (0xc0027cefa0) Create stream I0512 16:26:55.792586 7 log.go:172] (0xc0044da6e0) (0xc0027cefa0) Stream added, broadcasting: 1 I0512 16:26:55.795404 7 log.go:172] (0xc0044da6e0) Reply frame received for 1 I0512 16:26:55.795460 7 log.go:172] (0xc0044da6e0) (0xc002210140) Create stream I0512 16:26:55.795484 7 log.go:172] (0xc0044da6e0) (0xc002210140) Stream added, broadcasting: 3 I0512 16:26:55.796640 7 log.go:172] (0xc0044da6e0) Reply frame received for 3 I0512 16:26:55.796670 7 log.go:172] (0xc0044da6e0) (0xc0027cf040) Create stream I0512 16:26:55.796683 7 log.go:172] (0xc0044da6e0) (0xc0027cf040) Stream added, broadcasting: 5 I0512 16:26:55.797876 7 log.go:172] (0xc0044da6e0) Reply frame received for 5 I0512 16:26:55.852197 7 log.go:172] (0xc0044da6e0) Data frame received for 3 I0512 16:26:55.852228 7 log.go:172] (0xc002210140) (3) Data frame handling I0512 16:26:55.852243 7 log.go:172] (0xc002210140) (3) Data frame sent I0512 16:26:55.852250 7 log.go:172] (0xc0044da6e0) Data frame received for 3 I0512 16:26:55.852258 7 log.go:172] (0xc002210140) (3) Data frame handling I0512 16:26:55.852546 7 log.go:172] 
(0xc0044da6e0) Data frame received for 5 I0512 16:26:55.852580 7 log.go:172] (0xc0027cf040) (5) Data frame handling I0512 16:26:55.854127 7 log.go:172] (0xc0044da6e0) Data frame received for 1 I0512 16:26:55.854188 7 log.go:172] (0xc0027cefa0) (1) Data frame handling I0512 16:26:55.854216 7 log.go:172] (0xc0027cefa0) (1) Data frame sent I0512 16:26:55.854228 7 log.go:172] (0xc0044da6e0) (0xc0027cefa0) Stream removed, broadcasting: 1 I0512 16:26:55.854293 7 log.go:172] (0xc0044da6e0) Go away received I0512 16:26:55.854388 7 log.go:172] (0xc0044da6e0) (0xc0027cefa0) Stream removed, broadcasting: 1 I0512 16:26:55.854415 7 log.go:172] (0xc0044da6e0) (0xc002210140) Stream removed, broadcasting: 3 I0512 16:26:55.854431 7 log.go:172] (0xc0044da6e0) (0xc0027cf040) Stream removed, broadcasting: 5 May 12 16:26:55.854: INFO: Found all expected endpoints: [netserver-0] May 12 16:26:55.897: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.214:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1179 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 16:26:55.897: INFO: >>> kubeConfig: /root/.kube/config I0512 16:26:55.923852 7 log.go:172] (0xc0031f22c0) (0xc002210460) Create stream I0512 16:26:55.923884 7 log.go:172] (0xc0031f22c0) (0xc002210460) Stream added, broadcasting: 1 I0512 16:26:55.926345 7 log.go:172] (0xc0031f22c0) Reply frame received for 1 I0512 16:26:55.926395 7 log.go:172] (0xc0031f22c0) (0xc002210500) Create stream I0512 16:26:55.926411 7 log.go:172] (0xc0031f22c0) (0xc002210500) Stream added, broadcasting: 3 I0512 16:26:55.927256 7 log.go:172] (0xc0031f22c0) Reply frame received for 3 I0512 16:26:55.927296 7 log.go:172] (0xc0031f22c0) (0xc002761040) Create stream I0512 16:26:55.927307 7 log.go:172] (0xc0031f22c0) (0xc002761040) Stream added, broadcasting: 5 I0512 16:26:55.928299 7 log.go:172] (0xc0031f22c0) Reply frame received for 5 I0512 16:26:56.006074 7 log.go:172] (0xc0031f22c0) Data frame received for 5 I0512 16:26:56.006133 7 log.go:172] (0xc002761040) (5) Data frame handling I0512 16:26:56.006190 7 log.go:172] (0xc0031f22c0) Data frame received for 3 I0512 16:26:56.006210 7 log.go:172] (0xc002210500) (3) Data frame handling I0512 16:26:56.006244 7 log.go:172] (0xc002210500) (3) Data frame sent I0512 16:26:56.006270 7 log.go:172] (0xc0031f22c0) Data frame received for 3 I0512 16:26:56.006287 7 log.go:172] (0xc002210500) (3) Data frame handling I0512 16:26:56.008287 7 log.go:172] (0xc0031f22c0) Data frame received for 1 I0512 16:26:56.008342 7 log.go:172] (0xc002210460) (1) Data frame handling I0512 16:26:56.008363 7 log.go:172] (0xc002210460) (1) Data frame sent I0512 16:26:56.008382 7 log.go:172] (0xc0031f22c0) (0xc002210460) Stream removed, broadcasting: 1 I0512 16:26:56.008403 7 log.go:172] (0xc0031f22c0) Go away received I0512 16:26:56.008533 7 log.go:172] (0xc0031f22c0) (0xc002210460) Stream removed, broadcasting: 1 I0512 16:26:56.008562 7 log.go:172] (0xc0031f22c0) (0xc002210500) Stream removed, broadcasting: 3 I0512 16:26:56.008577 7 log.go:172] (0xc0031f22c0) (0xc002761040) Stream removed, broadcasting: 5 May 12 16:26:56.008: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:26:56.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"pod-network-test-1179" for this suite. • [SLOW TEST:32.158 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":48,"skipped":906,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:26:56.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 12 16:27:04.469: INFO: Successfully updated pod "annotationupdatee94ebb10-3d82-4425-9b75-d860728aac4b" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:27:05.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8136" for this suite. 
• [SLOW TEST:9.936 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":49,"skipped":917,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:27:05.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:27:28.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3956" for this suite. • [SLOW TEST:22.846 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":50,"skipped":921,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:27:28.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-5653/configmap-test-4e2910c6-76a7-4aac-a8ea-12f6f379f70d STEP: Creating a pod to test consume configMaps May 12 16:27:29.418: INFO: Waiting up to 5m0s for pod "pod-configmaps-632b3c9f-f046-4b59-b5ea-83c419f09fba" in namespace "configmap-5653" to be "success or failure" May 12 16:27:29.426: INFO: Pod 
"pod-configmaps-632b3c9f-f046-4b59-b5ea-83c419f09fba": Phase="Pending", Reason="", readiness=false. Elapsed: 8.268142ms May 12 16:27:31.430: INFO: Pod "pod-configmaps-632b3c9f-f046-4b59-b5ea-83c419f09fba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011645011s May 12 16:27:33.616: INFO: Pod "pod-configmaps-632b3c9f-f046-4b59-b5ea-83c419f09fba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.197473158s May 12 16:27:36.268: INFO: Pod "pod-configmaps-632b3c9f-f046-4b59-b5ea-83c419f09fba": Phase="Running", Reason="", readiness=true. Elapsed: 6.849692525s May 12 16:27:38.520: INFO: Pod "pod-configmaps-632b3c9f-f046-4b59-b5ea-83c419f09fba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.10198411s STEP: Saw pod success May 12 16:27:38.520: INFO: Pod "pod-configmaps-632b3c9f-f046-4b59-b5ea-83c419f09fba" satisfied condition "success or failure" May 12 16:27:38.554: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-632b3c9f-f046-4b59-b5ea-83c419f09fba container env-test: STEP: delete the pod May 12 16:27:38.863: INFO: Waiting for pod pod-configmaps-632b3c9f-f046-4b59-b5ea-83c419f09fba to disappear May 12 16:27:38.873: INFO: Pod pod-configmaps-632b3c9f-f046-4b59-b5ea-83c419f09fba no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:27:38.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5653" for this suite. • [SLOW TEST:10.133 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":51,"skipped":925,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:27:38.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 12 16:27:40.457: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 12 16:27:42.467: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724897660, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724897660, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724897660, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724897660, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 16:27:44.544: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724897660, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724897660, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724897660, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724897660, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 12 16:27:47.670: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 12 16:27:47.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9183-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:27:49.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5032" for this suite. STEP: Destroying namespace "webhook-5032-markers" for this suite. 
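Registering "the mutating webhook for custom resource e2e-test-webhook-9183-crds.webhook.example.com", as the STEP above puts it, amounts to creating an object along these lines. The group and resource plural follow from the CRD name in the log, and the service name and namespace from the endpoint checks, but the webhook name, path, and policy fields are illustrative and the caBundle is elided:

apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: mutate-custom-resource-demo      # illustrative name
webhooks:
- name: mutate-crd.webhook.example.com
  rules:
  - apiGroups: ["webhook.example.com"]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["e2e-test-webhook-9183-crds"]
  clientConfig:
    service:
      namespace: webhook-5032
      name: e2e-test-webhook
      path: /mutating-custom-resource    # illustrative path
    # caBundle: <base64 CA bundle> (elided)
  sideEffects: None
  admissionReviewVersions: ["v1"]
  failurePolicy: Fail

Creating an instance of the custom resource then round-trips through this webhook, and the test asserts that the mutation landed.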
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.306 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":52,"skipped":927,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:27:51.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:28:52.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2963" for this suite. 
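This spec is the mirror image of the earlier initial-delay case: a probe that can never succeed keeps the pod NotReady for the full observation window, and because it is a readiness (not liveness) probe, the restart count stays at zero. A sketch with illustrative values:

apiVersion: v1
kind: Pod
metadata:
  name: never-ready-demo           # illustrative name
spec:
  containers:
  - name: probe-container
    image: busybox                 # assumption: any long-running image with /bin/false
    command: ["sleep", "3600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]    # always exits non-zero, so Ready never becomes True
      initialDelaySeconds: 5
      periodSeconds: 5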
• [SLOW TEST:61.064 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":53,"skipped":941,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:28:52.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-865beb25-d669-4b43-aa6a-421022c4593b STEP: Creating a pod to test consume secrets May 12 16:28:53.702: INFO: Waiting up to 5m0s for pod "pod-secrets-f74032a5-45e6-4b93-8ef5-1ad938dd614d" in namespace "secrets-7852" to be "success or failure" May 12 16:28:53.705: INFO: Pod "pod-secrets-f74032a5-45e6-4b93-8ef5-1ad938dd614d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.445221ms May 12 16:28:55.708: INFO: Pod "pod-secrets-f74032a5-45e6-4b93-8ef5-1ad938dd614d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006297438s May 12 16:28:57.796: INFO: Pod "pod-secrets-f74032a5-45e6-4b93-8ef5-1ad938dd614d": Phase="Running", Reason="", readiness=true. Elapsed: 4.094655508s May 12 16:28:59.800: INFO: Pod "pod-secrets-f74032a5-45e6-4b93-8ef5-1ad938dd614d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.098587716s STEP: Saw pod success May 12 16:28:59.800: INFO: Pod "pod-secrets-f74032a5-45e6-4b93-8ef5-1ad938dd614d" satisfied condition "success or failure" May 12 16:28:59.803: INFO: Trying to get logs from node jerma-worker pod pod-secrets-f74032a5-45e6-4b93-8ef5-1ad938dd614d container secret-volume-test: STEP: delete the pod May 12 16:28:59.836: INFO: Waiting for pod pod-secrets-f74032a5-45e6-4b93-8ef5-1ad938dd614d to disappear May 12 16:28:59.848: INFO: Pod pod-secrets-f74032a5-45e6-4b93-8ef5-1ad938dd614d no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:28:59.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7852" for this suite. 
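"Consumable in multiple volumes" means one Secret projected into the same pod at two mount points, with both views backed by the same data. Roughly, with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: secret-multi-mount-demo
spec:
  containers:
  - name: secret-volume-test
    image: busybox                 # assumption: anything that can cat the files
    command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    secret:
      secretName: secret-test      # the same Secret backs both volumes
  - name: secret-volume-2
    secret:
      secretName: secret-test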
• [SLOW TEST:7.550 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":54,"skipped":941,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:28:59.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 12 16:28:59.977: INFO: Waiting up to 5m0s for pod "downward-api-f92d26c0-6d31-479f-8203-b1e7e082744f" in namespace "downward-api-4491" to be "success or failure" May 12 16:29:00.132: INFO: Pod "downward-api-f92d26c0-6d31-479f-8203-b1e7e082744f": Phase="Pending", Reason="", readiness=false. Elapsed: 154.37285ms May 12 16:29:02.135: INFO: Pod "downward-api-f92d26c0-6d31-479f-8203-b1e7e082744f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.15788157s May 12 16:29:04.140: INFO: Pod "downward-api-f92d26c0-6d31-479f-8203-b1e7e082744f": Phase="Running", Reason="", readiness=true. Elapsed: 4.163324096s May 12 16:29:06.145: INFO: Pod "downward-api-f92d26c0-6d31-479f-8203-b1e7e082744f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.167720101s STEP: Saw pod success May 12 16:29:06.145: INFO: Pod "downward-api-f92d26c0-6d31-479f-8203-b1e7e082744f" satisfied condition "success or failure" May 12 16:29:06.148: INFO: Trying to get logs from node jerma-worker pod downward-api-f92d26c0-6d31-479f-8203-b1e7e082744f container dapi-container: STEP: delete the pod May 12 16:29:06.391: INFO: Waiting for pod downward-api-f92d26c0-6d31-479f-8203-b1e7e082744f to disappear May 12 16:29:06.663: INFO: Pod downward-api-f92d26c0-6d31-479f-8203-b1e7e082744f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:29:06.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4491" for this suite. 
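Limits and requests reach the container environment through resourceFieldRef entries in the downward API. The container name dapi-container matches the log above; the image, resource values, and variable names are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: downward-env-demo
spec:
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep -E 'CPU_|MEMORY_'"]
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      limits:
        cpu: "1"
        memory: 64Mi
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          containerName: dapi-container
          resource: limits.cpu
    - name: MEMORY_REQUEST
      valueFrom:
        resourceFieldRef:
          containerName: dapi-container
          resource: requests.memory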
• [SLOW TEST:6.818 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":55,"skipped":948,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:29:06.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-6d9cac4e-c7ee-4ae2-8079-43c3dc35a4a3 STEP: Creating a pod to test consume configMaps May 12 16:29:08.033: INFO: Waiting up to 5m0s for pod "pod-configmaps-4be9eb29-6d48-465b-8dae-d41d9df8fcca" in namespace "configmap-8346" to be "success or failure" May 12 16:29:08.331: INFO: Pod "pod-configmaps-4be9eb29-6d48-465b-8dae-d41d9df8fcca": Phase="Pending", Reason="", readiness=false. Elapsed: 298.321702ms May 12 16:29:11.360: INFO: Pod "pod-configmaps-4be9eb29-6d48-465b-8dae-d41d9df8fcca": Phase="Pending", Reason="", readiness=false. Elapsed: 3.327253325s May 12 16:29:13.540: INFO: Pod "pod-configmaps-4be9eb29-6d48-465b-8dae-d41d9df8fcca": Phase="Pending", Reason="", readiness=false. Elapsed: 5.506889403s May 12 16:29:15.756: INFO: Pod "pod-configmaps-4be9eb29-6d48-465b-8dae-d41d9df8fcca": Phase="Pending", Reason="", readiness=false. Elapsed: 7.723266359s May 12 16:29:18.447: INFO: Pod "pod-configmaps-4be9eb29-6d48-465b-8dae-d41d9df8fcca": Phase="Pending", Reason="", readiness=false. Elapsed: 10.414066085s May 12 16:29:20.701: INFO: Pod "pod-configmaps-4be9eb29-6d48-465b-8dae-d41d9df8fcca": Phase="Pending", Reason="", readiness=false. Elapsed: 12.668533769s May 12 16:29:22.785: INFO: Pod "pod-configmaps-4be9eb29-6d48-465b-8dae-d41d9df8fcca": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.752482113s STEP: Saw pod success May 12 16:29:22.785: INFO: Pod "pod-configmaps-4be9eb29-6d48-465b-8dae-d41d9df8fcca" satisfied condition "success or failure" May 12 16:29:22.787: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-4be9eb29-6d48-465b-8dae-d41d9df8fcca container configmap-volume-test: STEP: delete the pod May 12 16:29:23.240: INFO: Waiting for pod pod-configmaps-4be9eb29-6d48-465b-8dae-d41d9df8fcca to disappear May 12 16:29:23.283: INFO: Pod pod-configmaps-4be9eb29-6d48-465b-8dae-d41d9df8fcca no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:29:23.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8346" for this suite. • [SLOW TEST:17.316 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":56,"skipped":950,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:29:23.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods May 12 16:29:33.884: INFO: Successfully updated pod "adopt-release-g9mgv" STEP: Checking that the Job readopts the Pod May 12 16:29:33.884: INFO: Waiting up to 15m0s for pod "adopt-release-g9mgv" in namespace "job-8738" to be "adopted" May 12 16:29:33.891: INFO: Pod "adopt-release-g9mgv": Phase="Running", Reason="", readiness=true. Elapsed: 6.906217ms May 12 16:29:35.894: INFO: Pod "adopt-release-g9mgv": Phase="Running", Reason="", readiness=true. Elapsed: 2.009579732s May 12 16:29:35.894: INFO: Pod "adopt-release-g9mgv" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod May 12 16:29:36.402: INFO: Successfully updated pod "adopt-release-g9mgv" STEP: Checking that the Job releases the Pod May 12 16:29:36.402: INFO: Waiting up to 15m0s for pod "adopt-release-g9mgv" in namespace "job-8738" to be "released" May 12 16:29:36.820: INFO: Pod "adopt-release-g9mgv": Phase="Running", Reason="", readiness=true. 
Elapsed: 417.148483ms May 12 16:29:36.820: INFO: Pod "adopt-release-g9mgv" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:29:36.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-8738" for this suite. • [SLOW TEST:14.693 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":57,"skipped":978,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:29:38.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 12 16:29:40.109: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 12 16:29:41.331: INFO: Waiting for terminating namespaces to be deleted... 
May 12 16:29:41.333: INFO: Logging pods the kubelet thinks is on node jerma-worker before test May 12 16:29:41.784: INFO: adopt-release-jp9wp from job-8738 started at 2020-05-12 16:29:38 +0000 UTC (1 container statuses recorded) May 12 16:29:41.784: INFO: Container c ready: false, restart count 0 May 12 16:29:41.784: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 12 16:29:41.784: INFO: Container kube-proxy ready: true, restart count 0 May 12 16:29:41.784: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 12 16:29:41.784: INFO: Container kindnet-cni ready: true, restart count 0 May 12 16:29:41.784: INFO: adopt-release-g9mgv from job-8738 started at 2020-05-12 16:29:26 +0000 UTC (1 container statuses recorded) May 12 16:29:41.784: INFO: Container c ready: true, restart count 0 May 12 16:29:41.784: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test May 12 16:29:41.835: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 12 16:29:41.835: INFO: Container kindnet-cni ready: true, restart count 0 May 12 16:29:41.835: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) May 12 16:29:41.835: INFO: Container kube-bench ready: false, restart count 0 May 12 16:29:41.835: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 12 16:29:41.835: INFO: Container kube-proxy ready: true, restart count 0 May 12 16:29:41.835: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) May 12 16:29:41.835: INFO: Container kube-hunter ready: false, restart count 0 May 12 16:29:41.835: INFO: adopt-release-gdbsg from job-8738 started at 2020-05-12 16:29:25 +0000 UTC (1 container statuses recorded) May 12 16:29:41.835: INFO: Container c ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-07e298c1-00e8-4d5b-acde-8cd01f7c991a 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-07e298c1-00e8-4d5b-acde-8cd01f7c991a off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-07e298c1-00e8-4d5b-acde-8cd01f7c991a [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:30:21.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3218" for this suite. 
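All three pods here ask for hostPort 54321 on the labeled node and still schedule, because a hostPort conflict requires the full (hostIP, protocol, hostPort) triple to collide: pod2 differs from pod1 in hostIP, and pod3 reuses pod2's hostIP but switches to UDP. A sketch of the three specs; the image is an assumption (the scheduling check needs no real workload), while the nodeSelector reuses the random label and value from the log:

apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  nodeSelector:
    kubernetes.io/e2e-07e298c1-00e8-4d5b-acde-8cd01f7c991a: "90"
  containers:
  - name: pod1
    image: k8s.gcr.io/pause:3.1
    ports:
    - containerPort: 8080
      hostPort: 54321
      hostIP: 127.0.0.1
      protocol: TCP
---
apiVersion: v1
kind: Pod
metadata:
  name: pod2
spec:
  nodeSelector:
    kubernetes.io/e2e-07e298c1-00e8-4d5b-acde-8cd01f7c991a: "90"
  containers:
  - name: pod2
    image: k8s.gcr.io/pause:3.1
    ports:
    - containerPort: 8080
      hostPort: 54321
      hostIP: 127.0.0.2            # different hostIP: no conflict with pod1
      protocol: TCP
---
apiVersion: v1
kind: Pod
metadata:
  name: pod3
spec:
  nodeSelector:
    kubernetes.io/e2e-07e298c1-00e8-4d5b-acde-8cd01f7c991a: "90"
  containers:
  - name: pod3
    image: k8s.gcr.io/pause:3.1
    ports:
    - containerPort: 8080
      hostPort: 54321
      hostIP: 127.0.0.2
      protocol: UDP                # same hostIP as pod2, but UDP: still no conflict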
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:42.652 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":58,"skipped":997,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:30:21.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override arguments May 12 16:30:21.905: INFO: Waiting up to 5m0s for pod "client-containers-fbdd55c2-c6af-4f0a-97d1-42224e175e66" in namespace "containers-843" to be "success or failure" May 12 16:30:21.911: INFO: Pod "client-containers-fbdd55c2-c6af-4f0a-97d1-42224e175e66": Phase="Pending", Reason="", readiness=false. Elapsed: 6.01197ms May 12 16:30:24.044: INFO: Pod "client-containers-fbdd55c2-c6af-4f0a-97d1-42224e175e66": Phase="Pending", Reason="", readiness=false. Elapsed: 2.138170904s May 12 16:30:26.047: INFO: Pod "client-containers-fbdd55c2-c6af-4f0a-97d1-42224e175e66": Phase="Pending", Reason="", readiness=false. Elapsed: 4.141347023s May 12 16:30:28.067: INFO: Pod "client-containers-fbdd55c2-c6af-4f0a-97d1-42224e175e66": Phase="Pending", Reason="", readiness=false. Elapsed: 6.161159289s May 12 16:30:30.172: INFO: Pod "client-containers-fbdd55c2-c6af-4f0a-97d1-42224e175e66": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.266373092s STEP: Saw pod success May 12 16:30:30.172: INFO: Pod "client-containers-fbdd55c2-c6af-4f0a-97d1-42224e175e66" satisfied condition "success or failure" May 12 16:30:30.490: INFO: Trying to get logs from node jerma-worker pod client-containers-fbdd55c2-c6af-4f0a-97d1-42224e175e66 container test-container: STEP: delete the pod May 12 16:30:31.120: INFO: Waiting for pod client-containers-fbdd55c2-c6af-4f0a-97d1-42224e175e66 to disappear May 12 16:30:31.130: INFO: Pod client-containers-fbdd55c2-c6af-4f0a-97d1-42224e175e66 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:30:31.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-843" for this suite. • [SLOW TEST:9.870 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":59,"skipped":1032,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:30:31.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 12 16:30:31.459: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 12 16:30:31.687: INFO: Waiting for terminating namespaces to be deleted... 
May 12 16:30:31.788: INFO: Logging pods the kubelet thinks is on node jerma-worker before test May 12 16:30:31.793: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 12 16:30:31.793: INFO: Container kindnet-cni ready: true, restart count 0 May 12 16:30:31.793: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 12 16:30:31.793: INFO: Container kube-proxy ready: true, restart count 0 May 12 16:30:31.793: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test May 12 16:30:31.922: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 12 16:30:31.922: INFO: Container kindnet-cni ready: true, restart count 0 May 12 16:30:31.922: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) May 12 16:30:31.922: INFO: Container kube-bench ready: false, restart count 0 May 12 16:30:31.922: INFO: pod2 from sched-pred-3218 started at 2020-05-12 16:30:08 +0000 UTC (1 container statuses recorded) May 12 16:30:31.922: INFO: Container pod2 ready: false, restart count 0 May 12 16:30:31.922: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 12 16:30:31.922: INFO: Container kube-proxy ready: true, restart count 0 May 12 16:30:31.922: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) May 12 16:30:31.922: INFO: Container kube-hunter ready: false, restart count 0 May 12 16:30:31.922: INFO: pod1 from sched-pred-3218 started at 2020-05-12 16:29:57 +0000 UTC (1 container statuses recorded) May 12 16:30:31.922: INFO: Container pod1 ready: false, restart count 0 May 12 16:30:31.922: INFO: pod3 from sched-pred-3218 started at 2020-05-12 16:30:15 +0000 UTC (1 container statuses recorded) May 12 16:30:31.922: INFO: Container pod3 ready: false, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-4e186300-6481-4adc-8c6e-7884dfe3d6b0 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-4e186300-6481-4adc-8c6e-7884dfe3d6b0 off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-4e186300-6481-4adc-8c6e-7884dfe3d6b0 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:35:44.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1076" for this suite. 
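This is the converse of the previous predicate test: leaving hostIP unset defaults it to 0.0.0.0, which claims the port on every host address, so pod5's request for the same port and protocol on 127.0.0.1 does conflict and pod5 never lands on the node. Most of the 313-second wall time (versus ~40s for the no-conflict case) is presumably the suite confirming that pod5 stays Pending. A trimmed sketch, with the nodeSelector omitted and the image an assumption as before:

apiVersion: v1
kind: Pod
metadata:
  name: pod4
spec:
  containers:
  - name: pod4
    image: k8s.gcr.io/pause:3.1
    ports:
    - containerPort: 8080
      hostPort: 54322              # hostIP omitted, i.e. 0.0.0.0: all host addresses
      protocol: TCP
---
apiVersion: v1
kind: Pod
metadata:
  name: pod5
spec:
  containers:
  - name: pod5
    image: k8s.gcr.io/pause:3.1
    ports:
    - containerPort: 8080
      hostPort: 54322
      hostIP: 127.0.0.1            # overlaps 0.0.0.0:54322/TCP, so pod5 cannot be placed here
      protocol: TCP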
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:313.686 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":60,"skipped":1032,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:35:44.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium May 12 16:35:45.185: INFO: Waiting up to 5m0s for pod "pod-c511ec56-3fc5-40c9-a5a9-8ac9ad3bcb22" in namespace "emptydir-5787" to be "success or failure" May 12 16:35:45.231: INFO: Pod "pod-c511ec56-3fc5-40c9-a5a9-8ac9ad3bcb22": Phase="Pending", Reason="", readiness=false. Elapsed: 45.7479ms May 12 16:35:47.354: INFO: Pod "pod-c511ec56-3fc5-40c9-a5a9-8ac9ad3bcb22": Phase="Pending", Reason="", readiness=false. Elapsed: 2.169146923s May 12 16:35:49.357: INFO: Pod "pod-c511ec56-3fc5-40c9-a5a9-8ac9ad3bcb22": Phase="Pending", Reason="", readiness=false. Elapsed: 4.171920239s May 12 16:35:51.392: INFO: Pod "pod-c511ec56-3fc5-40c9-a5a9-8ac9ad3bcb22": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.207238677s STEP: Saw pod success May 12 16:35:51.392: INFO: Pod "pod-c511ec56-3fc5-40c9-a5a9-8ac9ad3bcb22" satisfied condition "success or failure" May 12 16:35:51.509: INFO: Trying to get logs from node jerma-worker2 pod pod-c511ec56-3fc5-40c9-a5a9-8ac9ad3bcb22 container test-container: STEP: delete the pod May 12 16:35:51.775: INFO: Waiting for pod pod-c511ec56-3fc5-40c9-a5a9-8ac9ad3bcb22 to disappear May 12 16:35:51.856: INFO: Pod pod-c511ec56-3fc5-40c9-a5a9-8ac9ad3bcb22 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:35:51.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5787" for this suite. 
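The (non-root,0777,default) variant above amounts to a pod like this minimal sketch (pod name, UID, image, and command are assumptions): an emptyDir on the default medium (node disk, no medium: Memory) mounted into a container running as a non-root user, which verifies that a 0777 mode can be set and read back.

  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-0777-example
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1001  # non-root, per the test variant
    containers:
    - name: test-container
      image: docker.io/library/busybox:1.29  # assumed image
      # create a file in the volume and report its effective mode
      command: ["sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && stat -c %a /test-volume/f"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir: {}  # default medium; the "default" in (non-root,0777,default)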
• [SLOW TEST:7.054 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":61,"skipped":1033,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:35:51.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-dfdd9c8b-171d-43c3-bcea-41f090a0abed STEP: Creating secret with name s-test-opt-upd-3415538f-b44a-4c29-9ffc-151f424c80e3 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-dfdd9c8b-171d-43c3-bcea-41f090a0abed STEP: Updating secret s-test-opt-upd-3415538f-b44a-4c29-9ffc-151f424c80e3 STEP: Creating secret with name s-test-opt-create-88881ed8-bcda-4c8e-9f5c-a51ac489ff59 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:36:08.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3248" for this suite. 
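The "optional updates" behavior above hinges on marking the secret volume source optional: the pod starts (and keeps running) even when a referenced secret is deleted or has not been created yet, and the kubelet syncs the mounted keys once the secret appears or changes. A minimal sketch, with illustrative names and image:

  apiVersion: v1
  kind: Pod
  metadata:
    name: secret-optional-example
  spec:
    containers:
    - name: consumer
      image: docker.io/library/busybox:1.29  # assumed image
      # poll the mount; its contents change as secrets are deleted/updated/created
      command: ["sh", "-c", "while true; do ls /etc/secret-volume; sleep 5; done"]
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/secret-volume
    volumes:
    - name: secret-volume
      secret:
        secretName: s-test-opt-create-example  # may not exist at pod start
        optional: true  # without this, the pod would be stuck waiting for the secret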
• [SLOW TEST:16.926 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":62,"skipped":1047,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:36:08.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 12 16:36:10.551: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 12 16:36:13.094: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724898170, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724898170, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724898171, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724898170, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 16:36:15.259: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724898170, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724898170, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724898171, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724898170, 
loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 16:36:17.143: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724898170, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724898170, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724898171, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724898170, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 16:36:19.234: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724898170, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724898170, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724898171, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724898170, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 12 16:36:22.379: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:36:35.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-678" for this suite. STEP: Destroying namespace "webhook-678-markers" for this suite. 
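The timeout behavior verified above is configured per webhook in the registration object: timeoutSeconds caps how long the API server waits, and failurePolicy decides whether a timed-out call rejects the request (Fail) or is tolerated (Ignore); leaving timeoutSeconds unset defaults it to 10s in v1, as the last step notes. A sketch of the 1s/Ignore case (name, endpoint path, and caBundle are placeholders):

  apiVersion: admissionregistration.k8s.io/v1
  kind: ValidatingWebhookConfiguration
  metadata:
    name: slow-webhook-example
  webhooks:
  - name: slow.example.com
    admissionReviewVersions: ["v1"]
    sideEffects: None
    timeoutSeconds: 1        # deliberately shorter than the webhook's 5s latency
    failurePolicy: Ignore    # the timed-out call is tolerated instead of failing the request
    clientConfig:
      service:
        namespace: webhook-678
        name: e2e-test-webhook
        path: /always-allow-delay-5s  # placeholder for the slow endpoint
      caBundle: "<base64-encoded CA bundle>"  # placeholder
    rules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      operations: ["CREATE"]
      resources: ["pods"]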
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:27.785 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":63,"skipped":1067,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:36:36.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 12 16:36:36.724: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:36:43.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9006" for this suite. 
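The websocket test above only needs an ordinary long-running pod as its exec target; the interesting part is on the client side, which opens the pod's exec subresource (/api/v1/namespaces/<ns>/pods/<name>/exec) over a websocket rather than SPDY. A sketch of such a target pod (name and image are assumptions):

  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-exec-websocket-example
  spec:
    containers:
    - name: main
      image: docker.io/library/busybox:1.29  # assumed image
      # keep the container alive so there is something to exec into;
      # the client then streams stdin/stdout/stderr frames over the websocket
      command: ["sh", "-c", "sleep 600"]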
• [SLOW TEST:6.582 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":64,"skipped":1092,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:36:43.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 12 16:36:43.932: INFO: Waiting up to 5m0s for pod "downwardapi-volume-698f7834-2032-44b2-8a0a-e1a44ebf45ef" in namespace "projected-902" to be "success or failure" May 12 16:36:43.963: INFO: Pod "downwardapi-volume-698f7834-2032-44b2-8a0a-e1a44ebf45ef": Phase="Pending", Reason="", readiness=false. Elapsed: 30.823157ms May 12 16:36:46.034: INFO: Pod "downwardapi-volume-698f7834-2032-44b2-8a0a-e1a44ebf45ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102128921s May 12 16:36:48.097: INFO: Pod "downwardapi-volume-698f7834-2032-44b2-8a0a-e1a44ebf45ef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.165497014s May 12 16:36:50.101: INFO: Pod "downwardapi-volume-698f7834-2032-44b2-8a0a-e1a44ebf45ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.168986763s STEP: Saw pod success May 12 16:36:50.101: INFO: Pod "downwardapi-volume-698f7834-2032-44b2-8a0a-e1a44ebf45ef" satisfied condition "success or failure" May 12 16:36:50.103: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-698f7834-2032-44b2-8a0a-e1a44ebf45ef container client-container: STEP: delete the pod May 12 16:36:50.212: INFO: Waiting for pod downwardapi-volume-698f7834-2032-44b2-8a0a-e1a44ebf45ef to disappear May 12 16:36:50.225: INFO: Pod downwardapi-volume-698f7834-2032-44b2-8a0a-e1a44ebf45ef no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:36:50.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-902" for this suite. 
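The projected downward API volume above exposes the container's cpu limit as a file via resourceFieldRef; the divisor converts the quantity (for example, 1m reports millicores). A minimal sketch with illustrative names and image:

  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-cpu-limit-example
  spec:
    containers:
    - name: client-container
      image: docker.io/library/busybox:1.29  # assumed image
      command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
      resources:
        limits:
          cpu: "1"  # surfaces in the file as 1000 with a 1m divisor
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: cpu_limit
              resourceFieldRef:
                containerName: client-container
                resource: limits.cpu
                divisor: 1m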
• [SLOW TEST:6.989 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":65,"skipped":1096,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:36:50.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-9c961743-f9b9-4d7e-ade4-7affe6e649f5 STEP: Creating a pod to test consume configMaps May 12 16:36:50.331: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-70a59ad0-bceb-403a-a888-3536be1e3f66" in namespace "projected-8573" to be "success or failure" May 12 16:36:50.334: INFO: Pod "pod-projected-configmaps-70a59ad0-bceb-403a-a888-3536be1e3f66": Phase="Pending", Reason="", readiness=false. Elapsed: 2.745982ms May 12 16:36:52.397: INFO: Pod "pod-projected-configmaps-70a59ad0-bceb-403a-a888-3536be1e3f66": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065754318s May 12 16:36:54.421: INFO: Pod "pod-projected-configmaps-70a59ad0-bceb-403a-a888-3536be1e3f66": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089696454s May 12 16:36:56.595: INFO: Pod "pod-projected-configmaps-70a59ad0-bceb-403a-a888-3536be1e3f66": Phase="Pending", Reason="", readiness=false. Elapsed: 6.264011747s May 12 16:36:58.599: INFO: Pod "pod-projected-configmaps-70a59ad0-bceb-403a-a888-3536be1e3f66": Phase="Running", Reason="", readiness=true. Elapsed: 8.26845307s May 12 16:37:00.603: INFO: Pod "pod-projected-configmaps-70a59ad0-bceb-403a-a888-3536be1e3f66": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.272332869s STEP: Saw pod success May 12 16:37:00.603: INFO: Pod "pod-projected-configmaps-70a59ad0-bceb-403a-a888-3536be1e3f66" satisfied condition "success or failure" May 12 16:37:00.607: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-70a59ad0-bceb-403a-a888-3536be1e3f66 container projected-configmap-volume-test: STEP: delete the pod May 12 16:37:00.630: INFO: Waiting for pod pod-projected-configmaps-70a59ad0-bceb-403a-a888-3536be1e3f66 to disappear May 12 16:37:00.654: INFO: Pod pod-projected-configmaps-70a59ad0-bceb-403a-a888-3536be1e3f66 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:37:00.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8573" for this suite. • [SLOW TEST:10.429 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":66,"skipped":1154,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:37:00.662: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5244.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-5244.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5244.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5244.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-5244.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-5244.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 12 16:37:11.956: INFO: DNS probes using dns-5244/dns-test-4e82c549-5443-4938-8256-93d79cd07e60 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:37:12.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5244" for this suite. • [SLOW TEST:11.774 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":67,"skipped":1193,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:37:12.437: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-5c7005be-cb39-476f-9d3a-14b916aab84e STEP: Creating a pod to test consume configMaps May 12 16:37:12.611: INFO: Waiting up to 5m0s for pod "pod-configmaps-46ff696d-1817-4660-9819-129f3defed31" in namespace "configmap-1805" to be "success or failure" May 12 16:37:12.615: INFO: Pod "pod-configmaps-46ff696d-1817-4660-9819-129f3defed31": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057529ms May 12 16:37:14.706: INFO: Pod "pod-configmaps-46ff696d-1817-4660-9819-129f3defed31": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094642177s May 12 16:37:16.708: INFO: Pod "pod-configmaps-46ff696d-1817-4660-9819-129f3defed31": Phase="Pending", Reason="", readiness=false. Elapsed: 4.096959352s May 12 16:37:18.799: INFO: Pod "pod-configmaps-46ff696d-1817-4660-9819-129f3defed31": Phase="Pending", Reason="", readiness=false. Elapsed: 6.18805297s May 12 16:37:20.838: INFO: Pod "pod-configmaps-46ff696d-1817-4660-9819-129f3defed31": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.226606158s STEP: Saw pod success May 12 16:37:20.838: INFO: Pod "pod-configmaps-46ff696d-1817-4660-9819-129f3defed31" satisfied condition "success or failure" May 12 16:37:20.839: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-46ff696d-1817-4660-9819-129f3defed31 container configmap-volume-test: STEP: delete the pod May 12 16:37:20.887: INFO: Waiting for pod pod-configmaps-46ff696d-1817-4660-9819-129f3defed31 to disappear May 12 16:37:21.006: INFO: Pod pod-configmaps-46ff696d-1817-4660-9819-129f3defed31 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:37:21.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1805" for this suite. • [SLOW TEST:8.576 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":68,"skipped":1215,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:37:21.013: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 12 16:37:21.716: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2669f418-de3a-44b9-88d4-f7265984c962" in namespace "downward-api-3646" to be "success or failure" May 12 16:37:21.766: INFO: Pod "downwardapi-volume-2669f418-de3a-44b9-88d4-f7265984c962": Phase="Pending", Reason="", readiness=false. Elapsed: 50.139241ms May 12 16:37:23.769: INFO: Pod "downwardapi-volume-2669f418-de3a-44b9-88d4-f7265984c962": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052928982s May 12 16:37:25.776: INFO: Pod "downwardapi-volume-2669f418-de3a-44b9-88d4-f7265984c962": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.059873621s STEP: Saw pod success May 12 16:37:25.776: INFO: Pod "downwardapi-volume-2669f418-de3a-44b9-88d4-f7265984c962" satisfied condition "success or failure" May 12 16:37:25.780: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-2669f418-de3a-44b9-88d4-f7265984c962 container client-container: STEP: delete the pod May 12 16:37:25.972: INFO: Waiting for pod downwardapi-volume-2669f418-de3a-44b9-88d4-f7265984c962 to disappear May 12 16:37:26.018: INFO: Pod downwardapi-volume-2669f418-de3a-44b9-88d4-f7265984c962 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:37:26.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3646" for this suite. • [SLOW TEST:5.023 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":69,"skipped":1231,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:37:26.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 12 16:37:26.238: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c6297944-b3a3-4ddf-a1c0-425e720bfc64" in namespace "downward-api-9597" to be "success or failure" May 12 16:37:26.251: INFO: Pod "downwardapi-volume-c6297944-b3a3-4ddf-a1c0-425e720bfc64": Phase="Pending", Reason="", readiness=false. Elapsed: 12.635909ms May 12 16:37:28.263: INFO: Pod "downwardapi-volume-c6297944-b3a3-4ddf-a1c0-425e720bfc64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025117346s May 12 16:37:30.267: INFO: Pod "downwardapi-volume-c6297944-b3a3-4ddf-a1c0-425e720bfc64": Phase="Running", Reason="", readiness=true. Elapsed: 4.028246099s May 12 16:37:32.270: INFO: Pod "downwardapi-volume-c6297944-b3a3-4ddf-a1c0-425e720bfc64": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.031176625s STEP: Saw pod success May 12 16:37:32.270: INFO: Pod "downwardapi-volume-c6297944-b3a3-4ddf-a1c0-425e720bfc64" satisfied condition "success or failure" May 12 16:37:32.272: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-c6297944-b3a3-4ddf-a1c0-425e720bfc64 container client-container: STEP: delete the pod May 12 16:37:32.313: INFO: Waiting for pod downwardapi-volume-c6297944-b3a3-4ddf-a1c0-425e720bfc64 to disappear May 12 16:37:32.457: INFO: Pod downwardapi-volume-c6297944-b3a3-4ddf-a1c0-425e720bfc64 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:37:32.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9597" for this suite. • [SLOW TEST:6.428 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":70,"skipped":1256,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:37:32.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:37:32.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1535" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":71,"skipped":1284,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:37:32.912: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:37:44.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4579" for this suite. • [SLOW TEST:11.751 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":72,"skipped":1304,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:37:44.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-4489 STEP: creating a selector STEP: Creating the service pods in kubernetes May 12 16:37:45.591: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 12 16:38:20.674: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.94:8080/dial?request=hostname&protocol=udp&host=10.244.1.93&port=8081&tries=1'] Namespace:pod-network-test-4489 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 16:38:20.674: INFO: >>> kubeConfig: /root/.kube/config I0512 16:38:20.701968 7 log.go:172] (0xc002184210) (0xc0028f0320) Create stream I0512 16:38:20.701995 7 log.go:172] 
(0xc002184210) (0xc0028f0320) Stream added, broadcasting: 1 I0512 16:38:20.703781 7 log.go:172] (0xc002184210) Reply frame received for 1 I0512 16:38:20.703808 7 log.go:172] (0xc002184210) (0xc0022880a0) Create stream I0512 16:38:20.703819 7 log.go:172] (0xc002184210) (0xc0022880a0) Stream added, broadcasting: 3 I0512 16:38:20.704725 7 log.go:172] (0xc002184210) Reply frame received for 3 I0512 16:38:20.704772 7 log.go:172] (0xc002184210) (0xc0026ef7c0) Create stream I0512 16:38:20.704830 7 log.go:172] (0xc002184210) (0xc0026ef7c0) Stream added, broadcasting: 5 I0512 16:38:20.706157 7 log.go:172] (0xc002184210) Reply frame received for 5 I0512 16:38:20.787084 7 log.go:172] (0xc002184210) Data frame received for 3 I0512 16:38:20.787105 7 log.go:172] (0xc0022880a0) (3) Data frame handling I0512 16:38:20.787125 7 log.go:172] (0xc0022880a0) (3) Data frame sent I0512 16:38:20.787663 7 log.go:172] (0xc002184210) Data frame received for 3 I0512 16:38:20.787701 7 log.go:172] (0xc0022880a0) (3) Data frame handling I0512 16:38:20.787738 7 log.go:172] (0xc002184210) Data frame received for 5 I0512 16:38:20.787760 7 log.go:172] (0xc0026ef7c0) (5) Data frame handling I0512 16:38:20.788917 7 log.go:172] (0xc002184210) Data frame received for 1 I0512 16:38:20.788938 7 log.go:172] (0xc0028f0320) (1) Data frame handling I0512 16:38:20.788954 7 log.go:172] (0xc0028f0320) (1) Data frame sent I0512 16:38:20.789082 7 log.go:172] (0xc002184210) (0xc0028f0320) Stream removed, broadcasting: 1 I0512 16:38:20.789296 7 log.go:172] (0xc002184210) (0xc0028f0320) Stream removed, broadcasting: 1 I0512 16:38:20.789311 7 log.go:172] (0xc002184210) (0xc0022880a0) Stream removed, broadcasting: 3 I0512 16:38:20.789323 7 log.go:172] (0xc002184210) (0xc0026ef7c0) Stream removed, broadcasting: 5 May 12 16:38:20.789: INFO: Waiting for responses: map[] I0512 16:38:20.789566 7 log.go:172] (0xc002184210) Go away received May 12 16:38:20.792: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.94:8080/dial?request=hostname&protocol=udp&host=10.244.2.228&port=8081&tries=1'] Namespace:pod-network-test-4489 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 16:38:20.792: INFO: >>> kubeConfig: /root/.kube/config I0512 16:38:20.819843 7 log.go:172] (0xc002402b00) (0xc0026efb80) Create stream I0512 16:38:20.819869 7 log.go:172] (0xc002402b00) (0xc0026efb80) Stream added, broadcasting: 1 I0512 16:38:20.821784 7 log.go:172] (0xc002402b00) Reply frame received for 1 I0512 16:38:20.821815 7 log.go:172] (0xc002402b00) (0xc0026efc20) Create stream I0512 16:38:20.821825 7 log.go:172] (0xc002402b00) (0xc0026efc20) Stream added, broadcasting: 3 I0512 16:38:20.822643 7 log.go:172] (0xc002402b00) Reply frame received for 3 I0512 16:38:20.822679 7 log.go:172] (0xc002402b00) (0xc0021aa0a0) Create stream I0512 16:38:20.822696 7 log.go:172] (0xc002402b00) (0xc0021aa0a0) Stream added, broadcasting: 5 I0512 16:38:20.823332 7 log.go:172] (0xc002402b00) Reply frame received for 5 I0512 16:38:20.892484 7 log.go:172] (0xc002402b00) Data frame received for 3 I0512 16:38:20.892510 7 log.go:172] (0xc0026efc20) (3) Data frame handling I0512 16:38:20.892537 7 log.go:172] (0xc0026efc20) (3) Data frame sent I0512 16:38:20.893024 7 log.go:172] (0xc002402b00) Data frame received for 5 I0512 16:38:20.893056 7 log.go:172] (0xc0021aa0a0) (5) Data frame handling I0512 16:38:20.893223 7 log.go:172] (0xc002402b00) Data frame received for 3 I0512 16:38:20.893262 7 
log.go:172] (0xc0026efc20) (3) Data frame handling I0512 16:38:20.894507 7 log.go:172] (0xc002402b00) Data frame received for 1 I0512 16:38:20.894521 7 log.go:172] (0xc0026efb80) (1) Data frame handling I0512 16:38:20.894528 7 log.go:172] (0xc0026efb80) (1) Data frame sent I0512 16:38:20.894539 7 log.go:172] (0xc002402b00) (0xc0026efb80) Stream removed, broadcasting: 1 I0512 16:38:20.894575 7 log.go:172] (0xc002402b00) (0xc0026efb80) Stream removed, broadcasting: 1 I0512 16:38:20.894590 7 log.go:172] (0xc002402b00) (0xc0026efc20) Stream removed, broadcasting: 3 I0512 16:38:20.894664 7 log.go:172] (0xc002402b00) Go away received I0512 16:38:20.894764 7 log.go:172] (0xc002402b00) (0xc0021aa0a0) Stream removed, broadcasting: 5 May 12 16:38:20.894: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:38:20.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4489" for this suite. • [SLOW TEST:36.238 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":73,"skipped":1312,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:38:20.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating all guestbook components May 12 16:38:21.044: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend May 12 16:38:21.044: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4090' May 12 16:38:26.403: INFO: stderr: "" May 12 16:38:26.403: INFO: stdout: "service/agnhost-slave created\n" May 12 16:38:26.403: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend May 12 16:38:26.404: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4090' May 12 16:38:26.903: INFO: stderr: "" May 12 16:38:26.903: INFO: stdout: "service/agnhost-master created\n" May 12 16:38:26.903: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend May 12 16:38:26.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4090' May 12 16:38:27.840: INFO: stderr: "" May 12 16:38:27.840: INFO: stdout: "service/frontend created\n" May 12 16:38:27.840: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 May 12 16:38:27.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4090' May 12 16:38:28.834: INFO: stderr: "" May 12 16:38:28.834: INFO: stdout: "deployment.apps/frontend created\n" May 12 16:38:28.834: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 12 16:38:28.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4090' May 12 16:38:29.664: INFO: stderr: "" May 12 16:38:29.664: INFO: stdout: "deployment.apps/agnhost-master created\n" May 12 16:38:29.664: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 12 16:38:29.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4090' May 12 16:38:30.640: INFO: stderr: "" May 12 16:38:30.640: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app May 12 16:38:30.640: INFO: Waiting for all frontend pods to be Running. May 12 16:38:45.691: INFO: Waiting for frontend to serve content. May 12 16:38:45.753: INFO: Trying to add a new entry to the guestbook. May 12 16:38:45.940: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources May 12 16:38:45.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4090' May 12 16:38:46.432: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 12 16:38:46.432: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources May 12 16:38:46.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4090' May 12 16:38:46.939: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 12 16:38:46.939: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 12 16:38:46.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4090' May 12 16:38:47.252: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 12 16:38:47.252: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 12 16:38:47.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4090' May 12 16:38:47.397: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 12 16:38:47.397: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources May 12 16:38:47.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4090' May 12 16:38:47.559: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 12 16:38:47.559: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 12 16:38:47.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4090' May 12 16:38:47.912: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 12 16:38:47.912: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:38:47.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4090" for this suite. 
• [SLOW TEST:27.334 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:380 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":278,"completed":74,"skipped":1322,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:38:48.235: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-0b878c0e-05a4-4689-b38f-9399f29d4c84 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-0b878c0e-05a4-4689-b38f-9399f29d4c84 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:40:14.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2014" for this suite. 
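The long wait above (the spec takes 87s) reflects how ConfigMap volume updates propagate: unlike environment variables, which are fixed at container start, mounted keys are refreshed by the kubelet on its periodic sync, so an update shows up eventually rather than immediately. A sketch of the consuming pod (container name, image, and key are illustrative; the ConfigMap name is the one created above):

  apiVersion: v1
  kind: Pod
  metadata:
    name: configmap-watch-example
  spec:
    containers:
    - name: watcher
      image: docker.io/library/busybox:1.29  # assumed image
      # poll the mounted key; its content changes in place after the ConfigMap is updated
      command: ["sh", "-c", "while true; do cat /etc/config/data-1; sleep 5; done"]
      volumeMounts:
      - name: config
        mountPath: /etc/config
    volumes:
    - name: config
      configMap:
        name: configmap-test-upd-0b878c0e-05a4-4689-b38f-9399f29d4c84  # the ConfigMap created above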
• [SLOW TEST:87.127 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":75,"skipped":1327,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:40:15.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-1353 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating stateful set ss in namespace statefulset-1353 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1353 May 12 16:40:16.084: INFO: Found 0 stateful pods, waiting for 1 May 12 16:40:26.089: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 12 16:40:26.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 12 16:40:26.465: INFO: stderr: "I0512 16:40:26.291763 1347 log.go:172] (0xc0000d8370) (0xc0006b2000) Create stream\nI0512 16:40:26.291829 1347 log.go:172] (0xc0000d8370) (0xc0006b2000) Stream added, broadcasting: 1\nI0512 16:40:26.293461 1347 log.go:172] (0xc0000d8370) Reply frame received for 1\nI0512 16:40:26.293492 1347 log.go:172] (0xc0000d8370) (0xc0005a8dc0) Create stream\nI0512 16:40:26.293500 1347 log.go:172] (0xc0000d8370) (0xc0005a8dc0) Stream added, broadcasting: 3\nI0512 16:40:26.294293 1347 log.go:172] (0xc0000d8370) Reply frame received for 3\nI0512 16:40:26.294318 1347 log.go:172] (0xc0000d8370) (0xc0006b34a0) Create stream\nI0512 16:40:26.294326 1347 log.go:172] (0xc0000d8370) (0xc0006b34a0) Stream added, broadcasting: 5\nI0512 16:40:26.295015 1347 log.go:172] (0xc0000d8370) Reply frame received for 5\nI0512 16:40:26.343470 1347 log.go:172] (0xc0000d8370) Data frame received for 5\nI0512 16:40:26.343504 1347 log.go:172] (0xc0006b34a0) (5) Data frame handling\nI0512 16:40:26.343529 1347 log.go:172] (0xc0006b34a0) (5) Data frame 
sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0512 16:40:26.456839 1347 log.go:172] (0xc0000d8370) Data frame received for 3\nI0512 16:40:26.456858 1347 log.go:172] (0xc0005a8dc0) (3) Data frame handling\nI0512 16:40:26.456877 1347 log.go:172] (0xc0005a8dc0) (3) Data frame sent\nI0512 16:40:26.456887 1347 log.go:172] (0xc0000d8370) Data frame received for 3\nI0512 16:40:26.456892 1347 log.go:172] (0xc0005a8dc0) (3) Data frame handling\nI0512 16:40:26.457325 1347 log.go:172] (0xc0000d8370) Data frame received for 5\nI0512 16:40:26.457365 1347 log.go:172] (0xc0006b34a0) (5) Data frame handling\nI0512 16:40:26.459197 1347 log.go:172] (0xc0000d8370) Data frame received for 1\nI0512 16:40:26.459210 1347 log.go:172] (0xc0006b2000) (1) Data frame handling\nI0512 16:40:26.459216 1347 log.go:172] (0xc0006b2000) (1) Data frame sent\nI0512 16:40:26.459223 1347 log.go:172] (0xc0000d8370) (0xc0006b2000) Stream removed, broadcasting: 1\nI0512 16:40:26.459234 1347 log.go:172] (0xc0000d8370) Go away received\nI0512 16:40:26.459674 1347 log.go:172] (0xc0000d8370) (0xc0006b2000) Stream removed, broadcasting: 1\nI0512 16:40:26.459696 1347 log.go:172] (0xc0000d8370) (0xc0005a8dc0) Stream removed, broadcasting: 3\nI0512 16:40:26.459709 1347 log.go:172] (0xc0000d8370) (0xc0006b34a0) Stream removed, broadcasting: 5\n" May 12 16:40:26.465: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 12 16:40:26.465: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 12 16:40:26.469: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 12 16:40:36.724: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 12 16:40:36.725: INFO: Waiting for statefulset status.replicas updated to 0 May 12 16:40:37.324: INFO: POD NODE PHASE GRACE CONDITIONS May 12 16:40:37.324: INFO: ss-0 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:26 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:16 +0000 UTC }] May 12 16:40:37.324: INFO: May 12 16:40:37.324: INFO: StatefulSet ss has not reached scale 3, at 1 May 12 16:40:38.340: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.919703529s May 12 16:40:39.345: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.90293946s May 12 16:40:40.412: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.898623016s May 12 16:40:41.415: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.831105974s May 12 16:40:42.539: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.828110792s May 12 16:40:43.803: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.704728606s May 12 16:40:45.083: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.440541457s May 12 16:40:46.088: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.160363107s May 12 16:40:47.093: INFO: Verifying statefulset ss doesn't scale past 3 for another 155.393521ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace 
statefulset-1353 May 12 16:40:48.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 16:40:49.303: INFO: stderr: "I0512 16:40:49.179101 1364 log.go:172] (0xc0000f4a50) (0xc000703b80) Create stream\nI0512 16:40:49.179167 1364 log.go:172] (0xc0000f4a50) (0xc000703b80) Stream added, broadcasting: 1\nI0512 16:40:49.181414 1364 log.go:172] (0xc0000f4a50) Reply frame received for 1\nI0512 16:40:49.181469 1364 log.go:172] (0xc0000f4a50) (0xc000703d60) Create stream\nI0512 16:40:49.181492 1364 log.go:172] (0xc0000f4a50) (0xc000703d60) Stream added, broadcasting: 3\nI0512 16:40:49.182415 1364 log.go:172] (0xc0000f4a50) Reply frame received for 3\nI0512 16:40:49.182456 1364 log.go:172] (0xc0000f4a50) (0xc000703e00) Create stream\nI0512 16:40:49.182468 1364 log.go:172] (0xc0000f4a50) (0xc000703e00) Stream added, broadcasting: 5\nI0512 16:40:49.183240 1364 log.go:172] (0xc0000f4a50) Reply frame received for 5\nI0512 16:40:49.251686 1364 log.go:172] (0xc0000f4a50) Data frame received for 5\nI0512 16:40:49.251718 1364 log.go:172] (0xc000703e00) (5) Data frame handling\nI0512 16:40:49.251737 1364 log.go:172] (0xc000703e00) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0512 16:40:49.294991 1364 log.go:172] (0xc0000f4a50) Data frame received for 3\nI0512 16:40:49.295034 1364 log.go:172] (0xc000703d60) (3) Data frame handling\nI0512 16:40:49.295073 1364 log.go:172] (0xc000703d60) (3) Data frame sent\nI0512 16:40:49.295370 1364 log.go:172] (0xc0000f4a50) Data frame received for 3\nI0512 16:40:49.295390 1364 log.go:172] (0xc000703d60) (3) Data frame handling\nI0512 16:40:49.295451 1364 log.go:172] (0xc0000f4a50) Data frame received for 5\nI0512 16:40:49.295497 1364 log.go:172] (0xc000703e00) (5) Data frame handling\nI0512 16:40:49.297586 1364 log.go:172] (0xc0000f4a50) Data frame received for 1\nI0512 16:40:49.297623 1364 log.go:172] (0xc000703b80) (1) Data frame handling\nI0512 16:40:49.297646 1364 log.go:172] (0xc000703b80) (1) Data frame sent\nI0512 16:40:49.297675 1364 log.go:172] (0xc0000f4a50) (0xc000703b80) Stream removed, broadcasting: 1\nI0512 16:40:49.298214 1364 log.go:172] (0xc0000f4a50) (0xc000703b80) Stream removed, broadcasting: 1\nI0512 16:40:49.298249 1364 log.go:172] (0xc0000f4a50) (0xc000703d60) Stream removed, broadcasting: 3\nI0512 16:40:49.298276 1364 log.go:172] (0xc0000f4a50) (0xc000703e00) Stream removed, broadcasting: 5\n" May 12 16:40:49.304: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 12 16:40:49.304: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 12 16:40:49.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 16:40:49.521: INFO: stderr: "I0512 16:40:49.435576 1384 log.go:172] (0xc0000f4bb0) (0xc0009a4000) Create stream\nI0512 16:40:49.435646 1384 log.go:172] (0xc0000f4bb0) (0xc0009a4000) Stream added, broadcasting: 1\nI0512 16:40:49.439055 1384 log.go:172] (0xc0000f4bb0) Reply frame received for 1\nI0512 16:40:49.439113 1384 log.go:172] (0xc0000f4bb0) (0xc00070dc20) Create stream\nI0512 16:40:49.439131 1384 log.go:172] (0xc0000f4bb0) (0xc00070dc20) Stream added, broadcasting: 3\nI0512 16:40:49.440260 1384 log.go:172] (0xc0000f4bb0) 
Reply frame received for 3\nI0512 16:40:49.440306 1384 log.go:172] (0xc0000f4bb0) (0xc0009a40a0) Create stream\nI0512 16:40:49.440330 1384 log.go:172] (0xc0000f4bb0) (0xc0009a40a0) Stream added, broadcasting: 5\nI0512 16:40:49.441707 1384 log.go:172] (0xc0000f4bb0) Reply frame received for 5\nI0512 16:40:49.515266 1384 log.go:172] (0xc0000f4bb0) Data frame received for 5\nI0512 16:40:49.515336 1384 log.go:172] (0xc0009a40a0) (5) Data frame handling\nI0512 16:40:49.515366 1384 log.go:172] (0xc0009a40a0) (5) Data frame sent\nI0512 16:40:49.515386 1384 log.go:172] (0xc0000f4bb0) Data frame received for 5\nI0512 16:40:49.515395 1384 log.go:172] (0xc0009a40a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0512 16:40:49.515449 1384 log.go:172] (0xc0000f4bb0) Data frame received for 3\nI0512 16:40:49.515484 1384 log.go:172] (0xc00070dc20) (3) Data frame handling\nI0512 16:40:49.515507 1384 log.go:172] (0xc00070dc20) (3) Data frame sent\nI0512 16:40:49.515519 1384 log.go:172] (0xc0000f4bb0) Data frame received for 3\nI0512 16:40:49.515534 1384 log.go:172] (0xc00070dc20) (3) Data frame handling\nI0512 16:40:49.516609 1384 log.go:172] (0xc0000f4bb0) Data frame received for 1\nI0512 16:40:49.516636 1384 log.go:172] (0xc0009a4000) (1) Data frame handling\nI0512 16:40:49.516648 1384 log.go:172] (0xc0009a4000) (1) Data frame sent\nI0512 16:40:49.516660 1384 log.go:172] (0xc0000f4bb0) (0xc0009a4000) Stream removed, broadcasting: 1\nI0512 16:40:49.516681 1384 log.go:172] (0xc0000f4bb0) Go away received\nI0512 16:40:49.517021 1384 log.go:172] (0xc0000f4bb0) (0xc0009a4000) Stream removed, broadcasting: 1\nI0512 16:40:49.517046 1384 log.go:172] (0xc0000f4bb0) (0xc00070dc20) Stream removed, broadcasting: 3\nI0512 16:40:49.517056 1384 log.go:172] (0xc0000f4bb0) (0xc0009a40a0) Stream removed, broadcasting: 5\n" May 12 16:40:49.521: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 12 16:40:49.521: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 12 16:40:49.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 16:40:49.837: INFO: stderr: "I0512 16:40:49.754377 1404 log.go:172] (0xc00044ea50) (0xc0004ae1e0) Create stream\nI0512 16:40:49.754457 1404 log.go:172] (0xc00044ea50) (0xc0004ae1e0) Stream added, broadcasting: 1\nI0512 16:40:49.758434 1404 log.go:172] (0xc00044ea50) Reply frame received for 1\nI0512 16:40:49.758503 1404 log.go:172] (0xc00044ea50) (0xc0002072c0) Create stream\nI0512 16:40:49.758539 1404 log.go:172] (0xc00044ea50) (0xc0002072c0) Stream added, broadcasting: 3\nI0512 16:40:49.759653 1404 log.go:172] (0xc00044ea50) Reply frame received for 3\nI0512 16:40:49.759694 1404 log.go:172] (0xc00044ea50) (0xc0004ae320) Create stream\nI0512 16:40:49.759711 1404 log.go:172] (0xc00044ea50) (0xc0004ae320) Stream added, broadcasting: 5\nI0512 16:40:49.760702 1404 log.go:172] (0xc00044ea50) Reply frame received for 5\nI0512 16:40:49.828601 1404 log.go:172] (0xc00044ea50) Data frame received for 5\nI0512 16:40:49.828633 1404 log.go:172] (0xc0004ae320) (5) Data frame handling\nI0512 16:40:49.828650 1404 log.go:172] (0xc0004ae320) (5) Data frame sent\nI0512 16:40:49.828664 1404 log.go:172] (0xc00044ea50) Data frame received for 5\nI0512 
16:40:49.828677 1404 log.go:172] (0xc0004ae320) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0512 16:40:49.828703 1404 log.go:172] (0xc00044ea50) Data frame received for 3\nI0512 16:40:49.828711 1404 log.go:172] (0xc0002072c0) (3) Data frame handling\nI0512 16:40:49.828722 1404 log.go:172] (0xc0002072c0) (3) Data frame sent\nI0512 16:40:49.828735 1404 log.go:172] (0xc00044ea50) Data frame received for 3\nI0512 16:40:49.828747 1404 log.go:172] (0xc0002072c0) (3) Data frame handling\nI0512 16:40:49.831517 1404 log.go:172] (0xc00044ea50) Data frame received for 1\nI0512 16:40:49.831547 1404 log.go:172] (0xc0004ae1e0) (1) Data frame handling\nI0512 16:40:49.831564 1404 log.go:172] (0xc0004ae1e0) (1) Data frame sent\nI0512 16:40:49.831597 1404 log.go:172] (0xc00044ea50) (0xc0004ae1e0) Stream removed, broadcasting: 1\nI0512 16:40:49.831761 1404 log.go:172] (0xc00044ea50) Go away received\nI0512 16:40:49.832066 1404 log.go:172] (0xc00044ea50) (0xc0004ae1e0) Stream removed, broadcasting: 1\nI0512 16:40:49.832095 1404 log.go:172] (0xc00044ea50) (0xc0002072c0) Stream removed, broadcasting: 3\nI0512 16:40:49.832114 1404 log.go:172] (0xc00044ea50) (0xc0004ae320) Stream removed, broadcasting: 5\n" May 12 16:40:49.837: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 12 16:40:49.837: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 12 16:40:49.841: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 12 16:40:49.841: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 12 16:40:49.841: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 12 16:40:49.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 12 16:40:50.592: INFO: stderr: "I0512 16:40:49.974372 1427 log.go:172] (0xc0004cca50) (0xc0006c19a0) Create stream\nI0512 16:40:49.974424 1427 log.go:172] (0xc0004cca50) (0xc0006c19a0) Stream added, broadcasting: 1\nI0512 16:40:49.976520 1427 log.go:172] (0xc0004cca50) Reply frame received for 1\nI0512 16:40:49.976546 1427 log.go:172] (0xc0004cca50) (0xc000a4c000) Create stream\nI0512 16:40:49.976552 1427 log.go:172] (0xc0004cca50) (0xc000a4c000) Stream added, broadcasting: 3\nI0512 16:40:49.977713 1427 log.go:172] (0xc0004cca50) Reply frame received for 3\nI0512 16:40:49.977746 1427 log.go:172] (0xc0004cca50) (0xc000a7e000) Create stream\nI0512 16:40:49.977754 1427 log.go:172] (0xc0004cca50) (0xc000a7e000) Stream added, broadcasting: 5\nI0512 16:40:49.978769 1427 log.go:172] (0xc0004cca50) Reply frame received for 5\nI0512 16:40:50.041767 1427 log.go:172] (0xc0004cca50) Data frame received for 5\nI0512 16:40:50.041794 1427 log.go:172] (0xc000a7e000) (5) Data frame handling\nI0512 16:40:50.041820 1427 log.go:172] (0xc000a7e000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0512 16:40:50.587083 1427 log.go:172] (0xc0004cca50) Data frame received for 3\nI0512 16:40:50.587122 1427 log.go:172] (0xc000a4c000) (3) Data frame handling\nI0512 16:40:50.587149 1427 log.go:172] (0xc000a4c000) (3) Data frame sent\nI0512 16:40:50.587411 1427 
log.go:172] (0xc0004cca50) Data frame received for 5\nI0512 16:40:50.587427 1427 log.go:172] (0xc000a7e000) (5) Data frame handling\nI0512 16:40:50.587446 1427 log.go:172] (0xc0004cca50) Data frame received for 3\nI0512 16:40:50.587455 1427 log.go:172] (0xc000a4c000) (3) Data frame handling\nI0512 16:40:50.588252 1427 log.go:172] (0xc0004cca50) Data frame received for 1\nI0512 16:40:50.588265 1427 log.go:172] (0xc0006c19a0) (1) Data frame handling\nI0512 16:40:50.588273 1427 log.go:172] (0xc0006c19a0) (1) Data frame sent\nI0512 16:40:50.588347 1427 log.go:172] (0xc0004cca50) (0xc0006c19a0) Stream removed, broadcasting: 1\nI0512 16:40:50.588483 1427 log.go:172] (0xc0004cca50) Go away received\nI0512 16:40:50.588713 1427 log.go:172] (0xc0004cca50) (0xc0006c19a0) Stream removed, broadcasting: 1\nI0512 16:40:50.588728 1427 log.go:172] (0xc0004cca50) (0xc000a4c000) Stream removed, broadcasting: 3\nI0512 16:40:50.588737 1427 log.go:172] (0xc0004cca50) (0xc000a7e000) Stream removed, broadcasting: 5\n" May 12 16:40:50.592: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 12 16:40:50.592: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 12 16:40:50.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 12 16:40:51.150: INFO: stderr: "I0512 16:40:50.701912 1448 log.go:172] (0xc000710a50) (0xc0006f81e0) Create stream\nI0512 16:40:50.701957 1448 log.go:172] (0xc000710a50) (0xc0006f81e0) Stream added, broadcasting: 1\nI0512 16:40:50.703923 1448 log.go:172] (0xc000710a50) Reply frame received for 1\nI0512 16:40:50.703974 1448 log.go:172] (0xc000710a50) (0xc0006d0000) Create stream\nI0512 16:40:50.703997 1448 log.go:172] (0xc000710a50) (0xc0006d0000) Stream added, broadcasting: 3\nI0512 16:40:50.704815 1448 log.go:172] (0xc000710a50) Reply frame received for 3\nI0512 16:40:50.704856 1448 log.go:172] (0xc000710a50) (0xc00067d9a0) Create stream\nI0512 16:40:50.704867 1448 log.go:172] (0xc000710a50) (0xc00067d9a0) Stream added, broadcasting: 5\nI0512 16:40:50.705632 1448 log.go:172] (0xc000710a50) Reply frame received for 5\nI0512 16:40:50.766339 1448 log.go:172] (0xc000710a50) Data frame received for 5\nI0512 16:40:50.766363 1448 log.go:172] (0xc00067d9a0) (5) Data frame handling\nI0512 16:40:50.766379 1448 log.go:172] (0xc00067d9a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0512 16:40:51.142215 1448 log.go:172] (0xc000710a50) Data frame received for 3\nI0512 16:40:51.142242 1448 log.go:172] (0xc0006d0000) (3) Data frame handling\nI0512 16:40:51.142256 1448 log.go:172] (0xc0006d0000) (3) Data frame sent\nI0512 16:40:51.142430 1448 log.go:172] (0xc000710a50) Data frame received for 3\nI0512 16:40:51.142440 1448 log.go:172] (0xc0006d0000) (3) Data frame handling\nI0512 16:40:51.142807 1448 log.go:172] (0xc000710a50) Data frame received for 5\nI0512 16:40:51.142827 1448 log.go:172] (0xc00067d9a0) (5) Data frame handling\nI0512 16:40:51.144996 1448 log.go:172] (0xc000710a50) Data frame received for 1\nI0512 16:40:51.145012 1448 log.go:172] (0xc0006f81e0) (1) Data frame handling\nI0512 16:40:51.145027 1448 log.go:172] (0xc0006f81e0) (1) Data frame sent\nI0512 16:40:51.145048 1448 log.go:172] (0xc000710a50) (0xc0006f81e0) Stream removed, broadcasting: 1\nI0512 16:40:51.145066 1448 log.go:172] (0xc000710a50) Go away 
received\nI0512 16:40:51.145818 1448 log.go:172] (0xc000710a50) (0xc0006f81e0) Stream removed, broadcasting: 1\nI0512 16:40:51.145859 1448 log.go:172] (0xc000710a50) (0xc0006d0000) Stream removed, broadcasting: 3\nI0512 16:40:51.145875 1448 log.go:172] (0xc000710a50) (0xc00067d9a0) Stream removed, broadcasting: 5\n" May 12 16:40:51.150: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 12 16:40:51.150: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 12 16:40:51.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 12 16:40:52.125: INFO: stderr: "I0512 16:40:51.682977 1469 log.go:172] (0xc000bcaf20) (0xc000bb8320) Create stream\nI0512 16:40:51.683012 1469 log.go:172] (0xc000bcaf20) (0xc000bb8320) Stream added, broadcasting: 1\nI0512 16:40:51.684193 1469 log.go:172] (0xc000bcaf20) Reply frame received for 1\nI0512 16:40:51.684219 1469 log.go:172] (0xc000bcaf20) (0xc000b900a0) Create stream\nI0512 16:40:51.684226 1469 log.go:172] (0xc000bcaf20) (0xc000b900a0) Stream added, broadcasting: 3\nI0512 16:40:51.685039 1469 log.go:172] (0xc000bcaf20) Reply frame received for 3\nI0512 16:40:51.685077 1469 log.go:172] (0xc000bcaf20) (0xc000bb83c0) Create stream\nI0512 16:40:51.685088 1469 log.go:172] (0xc000bcaf20) (0xc000bb83c0) Stream added, broadcasting: 5\nI0512 16:40:51.685967 1469 log.go:172] (0xc000bcaf20) Reply frame received for 5\nI0512 16:40:51.740736 1469 log.go:172] (0xc000bcaf20) Data frame received for 5\nI0512 16:40:51.740762 1469 log.go:172] (0xc000bb83c0) (5) Data frame handling\nI0512 16:40:51.740797 1469 log.go:172] (0xc000bb83c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0512 16:40:52.118179 1469 log.go:172] (0xc000bcaf20) Data frame received for 3\nI0512 16:40:52.118219 1469 log.go:172] (0xc000b900a0) (3) Data frame handling\nI0512 16:40:52.118231 1469 log.go:172] (0xc000b900a0) (3) Data frame sent\nI0512 16:40:52.118275 1469 log.go:172] (0xc000bcaf20) Data frame received for 5\nI0512 16:40:52.118305 1469 log.go:172] (0xc000bb83c0) (5) Data frame handling\nI0512 16:40:52.118495 1469 log.go:172] (0xc000bcaf20) Data frame received for 3\nI0512 16:40:52.118511 1469 log.go:172] (0xc000b900a0) (3) Data frame handling\nI0512 16:40:52.119934 1469 log.go:172] (0xc000bcaf20) Data frame received for 1\nI0512 16:40:52.119949 1469 log.go:172] (0xc000bb8320) (1) Data frame handling\nI0512 16:40:52.119960 1469 log.go:172] (0xc000bb8320) (1) Data frame sent\nI0512 16:40:52.120129 1469 log.go:172] (0xc000bcaf20) (0xc000bb8320) Stream removed, broadcasting: 1\nI0512 16:40:52.120314 1469 log.go:172] (0xc000bcaf20) Go away received\nI0512 16:40:52.120368 1469 log.go:172] (0xc000bcaf20) (0xc000bb8320) Stream removed, broadcasting: 1\nI0512 16:40:52.120384 1469 log.go:172] (0xc000bcaf20) (0xc000b900a0) Stream removed, broadcasting: 3\nI0512 16:40:52.120393 1469 log.go:172] (0xc000bcaf20) (0xc000bb83c0) Stream removed, broadcasting: 5\n" May 12 16:40:52.125: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 12 16:40:52.125: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 12 16:40:52.125: INFO: Waiting for statefulset status.replicas updated to 0 May 12 16:40:52.205: INFO: Waiting 
for stateful set status.readyReplicas to become 0, currently 1 May 12 16:41:02.214: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 12 16:41:02.214: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 12 16:41:02.214: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 12 16:41:02.389: INFO: POD NODE PHASE GRACE CONDITIONS May 12 16:41:02.389: INFO: ss-0 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:16 +0000 UTC }] May 12 16:41:02.389: INFO: ss-1 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:37 +0000 UTC }] May 12 16:41:02.390: INFO: ss-2 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:37 +0000 UTC }] May 12 16:41:02.390: INFO: May 12 16:41:02.390: INFO: StatefulSet ss has not reached scale 0, at 3 May 12 16:41:03.786: INFO: POD NODE PHASE GRACE CONDITIONS May 12 16:41:03.786: INFO: ss-0 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:16 +0000 UTC }] May 12 16:41:03.786: INFO: ss-1 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:37 +0000 UTC }] May 12 16:41:03.786: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} 
{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:37 +0000 UTC }] May 12 16:41:03.786: INFO: May 12 16:41:03.786: INFO: StatefulSet ss has not reached scale 0, at 3 May 12 16:41:04.803: INFO: POD NODE PHASE GRACE CONDITIONS May 12 16:41:04.803: INFO: ss-0 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:16 +0000 UTC }] May 12 16:41:04.803: INFO: ss-1 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:37 +0000 UTC }] May 12 16:41:04.803: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:37 +0000 UTC }] May 12 16:41:04.803: INFO: May 12 16:41:04.803: INFO: StatefulSet ss has not reached scale 0, at 3 May 12 16:41:05.808: INFO: POD NODE PHASE GRACE CONDITIONS May 12 16:41:05.808: INFO: ss-0 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:16 +0000 UTC }] May 12 16:41:05.808: INFO: ss-1 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:37 +0000 UTC }] May 12 16:41:05.808: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:37 +0000 UTC }] May 12 16:41:05.808: INFO: May 12 16:41:05.808: INFO: StatefulSet ss has not reached scale 0, at 3 May 12 16:41:07.001: INFO: 
POD NODE PHASE GRACE CONDITIONS May 12 16:41:07.001: INFO: ss-0 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:16 +0000 UTC }] May 12 16:41:07.001: INFO: ss-1 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:37 +0000 UTC }] May 12 16:41:07.002: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:37 +0000 UTC }] May 12 16:41:07.002: INFO: May 12 16:41:07.002: INFO: StatefulSet ss has not reached scale 0, at 3 May 12 16:41:08.005: INFO: POD NODE PHASE GRACE CONDITIONS May 12 16:41:08.005: INFO: ss-0 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:16 +0000 UTC }] May 12 16:41:08.006: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:37 +0000 UTC }] May 12 16:41:08.006: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:37 +0000 UTC }] May 12 16:41:08.006: INFO: May 12 16:41:08.006: INFO: StatefulSet ss has not reached scale 0, at 3 May 12 16:41:09.011: INFO: POD NODE PHASE GRACE CONDITIONS May 12 16:41:09.011: INFO: ss-0 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 
UTC 2020-05-12 16:40:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:16 +0000 UTC }] May 12 16:41:09.011: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:37 +0000 UTC }] May 12 16:41:09.011: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:37 +0000 UTC }] May 12 16:41:09.011: INFO: May 12 16:41:09.011: INFO: StatefulSet ss has not reached scale 0, at 3 May 12 16:41:10.015: INFO: POD NODE PHASE GRACE CONDITIONS May 12 16:41:10.016: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:16 +0000 UTC }] May 12 16:41:10.016: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:37 +0000 UTC }] May 12 16:41:10.016: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:37 +0000 UTC }] May 12 16:41:10.016: INFO: May 12 16:41:10.016: INFO: StatefulSet ss has not reached scale 0, at 3 May 12 16:41:11.020: INFO: POD NODE PHASE GRACE CONDITIONS May 12 16:41:11.020: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:51 +0000 UTC ContainersNotReady containers 
with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:16 +0000 UTC }] May 12 16:41:11.020: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:37 +0000 UTC }] May 12 16:41:11.020: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:37 +0000 UTC }] May 12 16:41:11.020: INFO: May 12 16:41:11.020: INFO: StatefulSet ss has not reached scale 0, at 3 May 12 16:41:12.024: INFO: POD NODE PHASE GRACE CONDITIONS May 12 16:41:12.024: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:16 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:16 +0000 UTC }] May 12 16:41:12.024: INFO: ss-1 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:37 +0000 UTC }] May 12 16:41:12.025: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:37 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-12 16:40:37 +0000 UTC }] May 12 16:41:12.025: INFO: May 12 16:41:12.025: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-1353 May 12 16:41:13.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 16:41:13.161: INFO: rc: 1 May 12 16:41:13.161: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade 
connection: container not found ("webserver") error: exit status 1 May 12 16:41:23.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 16:41:23.257: INFO: rc: 1 May 12 16:41:23.258: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 16:41:33.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 16:41:33.497: INFO: rc: 1 May 12 16:41:33.497: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 16:41:43.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 16:41:43.644: INFO: rc: 1 May 12 16:41:43.644: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 16:41:53.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 16:41:53.728: INFO: rc: 1 May 12 16:41:53.728: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 16:42:03.728: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 16:42:03.805: INFO: rc: 1 May 12 16:42:03.805: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 16:42:13.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 16:42:14.094: INFO: rc: 1 May 12 16:42:14.094: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not 
found error: exit status 1 May 12 16:42:24.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 16:42:24.180: INFO: rc: 1 May 12 16:42:24.180: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 16:42:34.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 16:42:34.609: INFO: rc: 1 May 12 16:42:34.609: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 16:42:44.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 16:42:44.695: INFO: rc: 1 May 12 16:42:44.695: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 16:42:54.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 16:42:54.797: INFO: rc: 1 May 12 16:42:54.797: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 16:43:04.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 16:43:04.896: INFO: rc: 1 May 12 16:43:04.896: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 16:43:14.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 16:43:14.985: INFO: rc: 1 May 12 16:43:14.985: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 
16:43:24.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 16:43:25.078: INFO: rc: 1 May 12 16:43:25.078: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 16:43:35.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 16:43:35.180: INFO: rc: 1 May 12 16:43:35.180: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 16:43:45.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 16:43:45.778: INFO: rc: 1 May 12 16:43:45.778: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 16:43:55.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 16:43:55.914: INFO: rc: 1 May 12 16:43:55.914: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 16:44:05.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 16:44:06.082: INFO: rc: 1 May 12 16:44:06.082: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 16:44:16.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 16:44:16.669: INFO: rc: 1 May 12 16:44:16.669: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 16:44:26.669: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 16:44:26.783: INFO: rc: 1 May 12 16:44:26.783: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 16:44:36.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 16:44:36.901: INFO: rc: 1 May 12 16:44:36.901: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 16:44:46.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 16:44:47.061: INFO: rc: 1 May 12 16:44:47.061: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 16:44:57.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 16:44:57.163: INFO: rc: 1 May 12 16:44:57.164: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 16:45:07.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 16:45:07.267: INFO: rc: 1 May 12 16:45:07.267: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 16:45:17.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 16:45:17.430: INFO: rc: 1 May 12 16:45:17.430: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 16:45:27.430: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 16:45:27.531: INFO: rc: 1 May 12 16:45:27.532: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 16:45:37.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 16:45:37.637: INFO: rc: 1 May 12 16:45:37.637: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 16:45:47.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 16:45:47.834: INFO: rc: 1 May 12 16:45:47.834: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 16:45:57.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 16:45:57.927: INFO: rc: 1 May 12 16:45:57.927: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 16:46:07.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 16:46:08.025: INFO: rc: 1 May 12 16:46:08.025: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 12 16:46:18.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1353 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 16:46:18.210: INFO: rc: 1 May 12 16:46:18.210: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: May 12 16:46:18.210: INFO: Scaling statefulset ss to 0 May 12 16:46:18.229: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 12 16:46:18.231: INFO: Deleting all 
statefulset in ns statefulset-1353 May 12 16:46:18.233: INFO: Scaling statefulset ss to 0 May 12 16:46:18.239: INFO: Waiting for statefulset status.replicas updated to 0 May 12 16:46:18.241: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:46:18.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1353" for this suite. • [SLOW TEST:363.061 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":76,"skipped":1357,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:46:18.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 12 16:46:18.771: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 12 16:46:18.777: INFO: Waiting for terminating namespaces to be deleted... 
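------------------------------
Note on the burst-scaling spec above: the repeated kubectl exec calls move index.html out of (and back into) /usr/local/apache2/htdocs, flipping the webserver container's readiness probe between failing and passing; the point of the test is that with burst semantics the controller keeps creating and deleting pods even while peers are unready. That behavior corresponds to podManagementPolicy: Parallel on the StatefulSet. A sketch of such an object in Go follows (assuming the k8s.io/api apps/v1 types as of the v1.17 era, where the probe's embedded field is still named Handler rather than ProbeHandler; the image tag and probe details are inferred from the apache2 paths in the log, not quoted from the test source):

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	ss := &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss", Namespace: "statefulset-1353"},
		Spec: appsv1.StatefulSetSpec{
			ServiceName: "test",
			Replicas:    int32Ptr(3),
			// Burst semantics: create and delete pods in parallel instead of
			// waiting for each ordinal to become Ready first.
			PodManagementPolicy: appsv1.ParallelPodManagement,
			Selector:            &metav1.LabelSelector{MatchLabels: map[string]string{"app": "ss"}},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"app": "ss"}},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "webserver",
						Image: "httpd:2.4", // assumption: the apache2 htdocs paths imply an httpd image
						// Readiness tracks whether index.html is present, which is
						// what the test toggles with `mv` via kubectl exec.
						ReadinessProbe: &corev1.Probe{
							Handler: corev1.Handler{ // renamed ProbeHandler in newer API versions
								HTTPGet: &corev1.HTTPGetAction{Path: "/index.html", Port: intstr.FromInt(80)},
							},
						},
					}},
				},
			},
		},
	}
	fmt.Println(ss.Name, *ss.Spec.Replicas, ss.Spec.PodManagementPolicy)
}

Under the default OrderedReady policy, ss-1 would never have been created while ss-0 was unready; Parallel is what lets the log show all three replicas progressing at once.
------------------------------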
May 12 16:46:18.778: INFO: Logging pods the kubelet thinks are on node jerma-worker before test May 12 16:46:18.791: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 12 16:46:18.791: INFO: Container kindnet-cni ready: true, restart count 0 May 12 16:46:18.791: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 12 16:46:18.791: INFO: Container kube-proxy ready: true, restart count 0 May 12 16:46:18.791: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test May 12 16:46:18.803: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 12 16:46:18.804: INFO: Container kindnet-cni ready: true, restart count 0 May 12 16:46:18.804: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) May 12 16:46:18.804: INFO: Container kube-bench ready: false, restart count 0 May 12 16:46:18.804: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 12 16:46:18.804: INFO: Container kube-proxy ready: true, restart count 0 May 12 16:46:18.804: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) May 12 16:46:18.804: INFO: Container kube-hunter ready: false, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: verifying the node has the label node jerma-worker STEP: verifying the node has the label node jerma-worker2 May 12 16:46:19.018: INFO: Pod kindnet-c5svj requesting resource cpu=100m on Node jerma-worker May 12 16:46:19.018: INFO: Pod kindnet-zk6sq requesting resource cpu=100m on Node jerma-worker2 May 12 16:46:19.018: INFO: Pod kube-proxy-44mlz requesting resource cpu=0m on Node jerma-worker May 12 16:46:19.018: INFO: Pod kube-proxy-75q42 requesting resource cpu=0m on Node jerma-worker2 STEP: Starting Pods to consume most of the cluster CPU. May 12 16:46:19.018: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2 May 12 16:46:19.021: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker STEP: Creating another pod that requires an unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-6ed5f246-1965-4f07-9304-9d263b9ba638.160e55babe2d6612], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9004/filler-pod-6ed5f246-1965-4f07-9304-9d263b9ba638 to jerma-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-6ed5f246-1965-4f07-9304-9d263b9ba638.160e55bb1e4f61b1], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-6ed5f246-1965-4f07-9304-9d263b9ba638.160e55bba7464409], Reason = [Created], Message = [Created container filler-pod-6ed5f246-1965-4f07-9304-9d263b9ba638] STEP: Considering event: Type = [Normal], Name = [filler-pod-6ed5f246-1965-4f07-9304-9d263b9ba638.160e55bbbcbd0385], Reason = [Started], Message = [Started container filler-pod-6ed5f246-1965-4f07-9304-9d263b9ba638] STEP: Considering event: Type = [Normal], Name = [filler-pod-b6b8934f-f070-44f6-8086-a22e60fa6316.160e55bac4c2602e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9004/filler-pod-b6b8934f-f070-44f6-8086-a22e60fa6316 to jerma-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-b6b8934f-f070-44f6-8086-a22e60fa6316.160e55bb3a25bfaa], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-b6b8934f-f070-44f6-8086-a22e60fa6316.160e55bbc6c4e0dc], Reason = [Created], Message = [Created container filler-pod-b6b8934f-f070-44f6-8086-a22e60fa6316] STEP: Considering event: Type = [Normal], Name = [filler-pod-b6b8934f-f070-44f6-8086-a22e60fa6316.160e55bbd8fdeb41], Reason = [Started], Message = [Started container filler-pod-b6b8934f-f070-44f6-8086-a22e60fa6316] STEP: Considering event: Type = [Warning], Name = [additional-pod.160e55bc2f55779b], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node jerma-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:46:26.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9004" for this suite. 
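The predicate test above fills each node's remaining allocatable CPU with filler pods, then confirms one more pod is rejected with "Insufficient cpu". A sketch that reproduces the rejection directly, with a hypothetical pod name and a deliberately oversized request (not taken from the suite):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: oversized-pod          # hypothetical name
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
        resources:
          requests:
            cpu: "100"             # far beyond any node's allocatable CPU
    EOF
    kubectl describe pod oversized-pod   # events show FailedScheduling: ... Insufficient cpu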
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:7.963 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":77,"skipped":1371,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} S ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:46:26.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 12 16:46:26.570: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:46:34.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6107" for this suite. 
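With restartPolicy: Never, a failing init container fails the pod outright and the app containers never start, which is what the init container test above asserts. A minimal sketch under hypothetical names:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: init-fail-demo         # hypothetical
    spec:
      restartPolicy: Never
      initContainers:
      - name: init1
        image: busybox
        command: ['sh', '-c', 'exit 1']      # always fails
      containers:
      - name: app
        image: busybox
        command: ['sh', '-c', 'echo never runs']
    EOF
    kubectl get pod init-fail-demo -o jsonpath='{.status.phase}'   # settles at Failed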
• [SLOW TEST:8.256 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":78,"skipped":1372,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:46:34.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-22f170a9-f8d4-4f59-97bb-1f516fb4e445 STEP: Creating a pod to test consume secrets May 12 16:46:35.352: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1ecacf34-e500-4b1e-8a3e-f2e6f5bde4f3" in namespace "projected-5411" to be "success or failure" May 12 16:46:35.438: INFO: Pod "pod-projected-secrets-1ecacf34-e500-4b1e-8a3e-f2e6f5bde4f3": Phase="Pending", Reason="", readiness=false. Elapsed: 85.968783ms May 12 16:46:37.980: INFO: Pod "pod-projected-secrets-1ecacf34-e500-4b1e-8a3e-f2e6f5bde4f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.627667578s May 12 16:46:40.376: INFO: Pod "pod-projected-secrets-1ecacf34-e500-4b1e-8a3e-f2e6f5bde4f3": Phase="Pending", Reason="", readiness=false. Elapsed: 5.023776438s May 12 16:46:42.379: INFO: Pod "pod-projected-secrets-1ecacf34-e500-4b1e-8a3e-f2e6f5bde4f3": Phase="Running", Reason="", readiness=true. Elapsed: 7.026409832s May 12 16:46:44.425: INFO: Pod "pod-projected-secrets-1ecacf34-e500-4b1e-8a3e-f2e6f5bde4f3": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 9.072744771s STEP: Saw pod success May 12 16:46:44.425: INFO: Pod "pod-projected-secrets-1ecacf34-e500-4b1e-8a3e-f2e6f5bde4f3" satisfied condition "success or failure" May 12 16:46:44.429: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-1ecacf34-e500-4b1e-8a3e-f2e6f5bde4f3 container projected-secret-volume-test: STEP: delete the pod May 12 16:46:44.454: INFO: Waiting for pod pod-projected-secrets-1ecacf34-e500-4b1e-8a3e-f2e6f5bde4f3 to disappear May 12 16:46:44.479: INFO: Pod pod-projected-secrets-1ecacf34-e500-4b1e-8a3e-f2e6f5bde4f3 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:46:44.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5411" for this suite. • [SLOW TEST:9.852 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":79,"skipped":1378,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:46:44.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 12 16:46:44.780: INFO: Waiting up to 5m0s for pod "downward-api-82f21f59-407e-475b-a90a-8d28824ea551" in namespace "downward-api-324" to be "success or failure" May 12 16:46:44.788: INFO: Pod "downward-api-82f21f59-407e-475b-a90a-8d28824ea551": Phase="Pending", Reason="", readiness=false. Elapsed: 8.101496ms May 12 16:46:46.796: INFO: Pod "downward-api-82f21f59-407e-475b-a90a-8d28824ea551": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016705862s May 12 16:46:48.898: INFO: Pod "downward-api-82f21f59-407e-475b-a90a-8d28824ea551": Phase="Pending", Reason="", readiness=false. Elapsed: 4.11842312s May 12 16:46:50.988: INFO: Pod "downward-api-82f21f59-407e-475b-a90a-8d28824ea551": Phase="Pending", Reason="", readiness=false. Elapsed: 6.207982715s May 12 16:46:52.991: INFO: Pod "downward-api-82f21f59-407e-475b-a90a-8d28824ea551": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.21149546s STEP: Saw pod success May 12 16:46:52.991: INFO: Pod "downward-api-82f21f59-407e-475b-a90a-8d28824ea551" satisfied condition "success or failure" May 12 16:46:52.993: INFO: Trying to get logs from node jerma-worker2 pod downward-api-82f21f59-407e-475b-a90a-8d28824ea551 container dapi-container: STEP: delete the pod May 12 16:46:53.144: INFO: Waiting for pod downward-api-82f21f59-407e-475b-a90a-8d28824ea551 to disappear May 12 16:46:53.185: INFO: Pod downward-api-82f21f59-407e-475b-a90a-8d28824ea551 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:46:53.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-324" for this suite. • [SLOW TEST:8.696 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":80,"skipped":1396,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:46:53.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 12 16:46:59.180: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:46:59.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5040" for this suite. 
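The termination-message test above relies on the kubelet mounting a file at each container's terminationMessagePath and copying its contents into status.containerStatuses[*].state.terminated.message. A sketch with a hypothetical pod name, writing the message from a non-root user to a non-default path:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: termination-demo       # hypothetical
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox
        command: ['sh', '-c', 'printf DONE > /dev/termination-custom']
        terminationMessagePath: /dev/termination-custom   # non-default path
        securityContext:
          runAsUser: 1000                                  # non-root
    EOF
    kubectl get pod termination-demo \
      -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'   # DONE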
• [SLOW TEST:6.314 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":81,"skipped":1412,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:46:59.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 12 16:47:00.315: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 12 16:47:02.918: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724898820, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724898820, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724898820, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724898820, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 16:47:05.322: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724898820, loc:(*time.Location)(0x78ee0c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724898820, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724898820, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724898820, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 16:47:06.939: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724898820, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724898820, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724898820, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724898820, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 12 16:47:10.029: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:47:10.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1607" for this suite. STEP: Destroying namespace "webhook-1607-markers" for this suite. 
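The webhook test above first updates the mutating webhook configuration so its rule no longer matches CREATE (the next configmap is not mutated), then patches CREATE back in. The suite does this through the API; the equivalent JSON patches with kubectl, against a hypothetical configuration name, would look roughly like:

    # Drop CREATE from the first rule: configmaps created afterwards skip mutation
    kubectl patch mutatingwebhookconfiguration demo-webhook-config --type=json \
      -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["UPDATE"]}]'
    # Patch CREATE back in: newly created configmaps are mutated again
    kubectl patch mutatingwebhookconfiguration demo-webhook-config --type=json \
      -p='[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["CREATE","UPDATE"]}]'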
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.320 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":82,"skipped":1417,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:47:11.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-6382, will wait for the garbage collector to delete the pods May 12 16:47:20.885: INFO: Deleting Job.batch foo took: 5.609313ms May 12 16:47:21.185: INFO: Terminating Job.batch foo pods took: 300.21886ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:47:59.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-6382" for this suite. 
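Deleting the Job with a cascading propagation policy hands its pods to the garbage collector, which accounts for the wait logged above. A rough kubectl equivalent, using the namespace and Job name from the log; note the flag spelling changed across kubectl releases:

    # kubectl 1.20+ spells it --cascade=foreground; older clients used --cascade=true
    kubectl -n job-6382 delete job foo --cascade=foreground
    kubectl -n job-6382 get pods -l job-name=foo   # empty once the collector finishes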
• [SLOW TEST:47.984 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":83,"skipped":1441,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:47:59.811: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 12 16:48:00.171: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fe5897a5-c70d-475e-a023-a685179b6db2" in namespace "projected-5563" to be "success or failure" May 12 16:48:00.318: INFO: Pod "downwardapi-volume-fe5897a5-c70d-475e-a023-a685179b6db2": Phase="Pending", Reason="", readiness=false. Elapsed: 147.164405ms May 12 16:48:02.321: INFO: Pod "downwardapi-volume-fe5897a5-c70d-475e-a023-a685179b6db2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.150228684s May 12 16:48:04.455: INFO: Pod "downwardapi-volume-fe5897a5-c70d-475e-a023-a685179b6db2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.284240915s STEP: Saw pod success May 12 16:48:04.455: INFO: Pod "downwardapi-volume-fe5897a5-c70d-475e-a023-a685179b6db2" satisfied condition "success or failure" May 12 16:48:04.647: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-fe5897a5-c70d-475e-a023-a685179b6db2 container client-container: STEP: delete the pod May 12 16:48:04.857: INFO: Waiting for pod downwardapi-volume-fe5897a5-c70d-475e-a023-a685179b6db2 to disappear May 12 16:48:04.868: INFO: Pod downwardapi-volume-fe5897a5-c70d-475e-a023-a685179b6db2 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:48:04.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5563" for this suite. 
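The projected downwardAPI volume used above exposes pod metadata as files. A minimal sketch with hypothetical names:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: podname-demo           # hypothetical
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ['sh', '-c', 'cat /etc/podinfo/podname']
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: podname
                fieldRef:
                  fieldPath: metadata.name
    EOF
    kubectl logs podname-demo   # prints: podname-demo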
• [SLOW TEST:5.087 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":84,"skipped":1445,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:48:04.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token May 12 16:48:05.829: INFO: created pod pod-service-account-defaultsa May 12 16:48:05.829: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 12 16:48:05.982: INFO: created pod pod-service-account-mountsa May 12 16:48:05.982: INFO: pod pod-service-account-mountsa service account token volume mount: true May 12 16:48:06.175: INFO: created pod pod-service-account-nomountsa May 12 16:48:06.175: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 12 16:48:06.405: INFO: created pod pod-service-account-defaultsa-mountspec May 12 16:48:06.405: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 12 16:48:06.439: INFO: created pod pod-service-account-mountsa-mountspec May 12 16:48:06.439: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 12 16:48:06.630: INFO: created pod pod-service-account-nomountsa-mountspec May 12 16:48:06.630: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 12 16:48:06.695: INFO: created pod pod-service-account-defaultsa-nomountspec May 12 16:48:06.695: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 12 16:48:06.881: INFO: created pod pod-service-account-mountsa-nomountspec May 12 16:48:06.881: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 12 16:48:06.947: INFO: created pod pod-service-account-nomountsa-nomountspec May 12 16:48:06.947: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:48:06.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-8552" for this suite. 
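The automount matrix above combines the ServiceAccount-level and pod-level settings; the pod-level field wins when both are set. Opting a single pod out looks like this sketch (hypothetical name):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: no-token-demo          # hypothetical
    spec:
      automountServiceAccountToken: false   # pod-level opt-out overrides the ServiceAccount
      containers:
      - name: main
        image: busybox
        command: ['sleep', '3600']
    EOF
    # No serviceaccount token volume is mounted:
    kubectl get pod no-token-demo -o jsonpath='{.spec.containers[0].volumeMounts}'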
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":85,"skipped":1447,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:48:07.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 12 16:48:07.650: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:48:32.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2288" for this suite. • [SLOW TEST:24.876 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":86,"skipped":1465,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:48:32.194: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 12 16:48:32.280: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4340 /api/v1/namespaces/watch-4340/configmaps/e2e-watch-test-watch-closed c3da1839-2378-4ccc-b822-9f6985aad0ac 
15616051 0 2020-05-12 16:48:32 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 12 16:48:32.280: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4340 /api/v1/namespaces/watch-4340/configmaps/e2e-watch-test-watch-closed c3da1839-2378-4ccc-b822-9f6985aad0ac 15616052 0 2020-05-12 16:48:32 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 12 16:48:32.383: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4340 /api/v1/namespaces/watch-4340/configmaps/e2e-watch-test-watch-closed c3da1839-2378-4ccc-b822-9f6985aad0ac 15616053 0 2020-05-12 16:48:32 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 12 16:48:32.383: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-4340 /api/v1/namespaces/watch-4340/configmaps/e2e-watch-test-watch-closed c3da1839-2378-4ccc-b822-9f6985aad0ac 15616054 0 2020-05-12 16:48:32 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:48:32.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4340" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":87,"skipped":1519,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:48:32.393: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC May 12 16:48:32.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8179' May 12 16:48:36.477: INFO: stderr: "" May 12 16:48:36.477: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
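The watch test above re-opens a watch at the last resourceVersion the closed watch delivered, so the MODIFIED and DELETED events that happened in between are still observed. Against the raw API the same idea looks like this sketch, reusing the resourceVersion 15616052 from the log:

    # Stream configmap events starting from a known resourceVersion
    kubectl get --raw \
      '/api/v1/namespaces/watch-4340/configmaps?watch=true&resourceVersion=15616052'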
May 12 16:48:37.517: INFO: Selector matched 1 pods for map[app:agnhost] May 12 16:48:37.517: INFO: Found 0 / 1 May 12 16:48:38.510: INFO: Selector matched 1 pods for map[app:agnhost] May 12 16:48:38.510: INFO: Found 0 / 1 May 12 16:48:39.481: INFO: Selector matched 1 pods for map[app:agnhost] May 12 16:48:39.481: INFO: Found 0 / 1 May 12 16:48:40.492: INFO: Selector matched 1 pods for map[app:agnhost] May 12 16:48:40.492: INFO: Found 0 / 1 May 12 16:48:41.481: INFO: Selector matched 1 pods for map[app:agnhost] May 12 16:48:41.481: INFO: Found 1 / 1 May 12 16:48:41.481: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 12 16:48:41.483: INFO: Selector matched 1 pods for map[app:agnhost] May 12 16:48:41.483: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 12 16:48:41.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-fpnbf --namespace=kubectl-8179 -p {"metadata":{"annotations":{"x":"y"}}}' May 12 16:48:41.633: INFO: stderr: "" May 12 16:48:41.633: INFO: stdout: "pod/agnhost-master-fpnbf patched\n" STEP: checking annotations May 12 16:48:41.702: INFO: Selector matched 1 pods for map[app:agnhost] May 12 16:48:41.702: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:48:41.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8179" for this suite. • [SLOW TEST:9.315 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1432 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":88,"skipped":1525,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:48:41.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:48:41.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2386" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":89,"skipped":1544,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:48:41.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:48:42.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1710" for this suite. 
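The discovery walk in the CRD test above can be reproduced by fetching the same documents directly:

    kubectl get --raw /apis                           # group list; contains apiextensions.k8s.io
    kubectl get --raw /apis/apiextensions.k8s.io      # group document for apiextensions.k8s.io
    kubectl get --raw /apis/apiextensions.k8s.io/v1   # resource list; contains customresourcedefinitions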
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":90,"skipped":1561,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:48:42.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating cluster-info May 12 16:48:42.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' May 12 16:48:42.740: INFO: stderr: "" May 12 16:48:42.740: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:48:42.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5172" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":91,"skipped":1566,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:48:42.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:48:49.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7690" for this suite. 
• [SLOW TEST:6.815 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":92,"skipped":1582,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:48:49.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 12 16:48:50.863: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 12 16:48:51.625: INFO: Waiting for terminating namespaces to be deleted... May 12 16:48:51.878: INFO: Logging pods the kubelet thinks are on node jerma-worker before test May 12 16:48:52.148: INFO: client-containers-5be2d05c-e022-4707-a8bf-ef8cdd463b5b from containers-7690 started at 2020-05-12 16:48:43 +0000 UTC (1 container status recorded) May 12 16:48:52.148: INFO: Container test-container ready: true, restart count 0 May 12 16:48:52.148: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 12 16:48:52.148: INFO: Container kindnet-cni ready: true, restart count 0 May 12 16:48:52.148: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 12 16:48:52.148: INFO: Container kube-proxy ready: true, restart count 0 May 12 16:48:52.148: INFO: agnhost-master-fpnbf from kubectl-8179 started at 2020-05-12 16:48:36 +0000 UTC (1 container status recorded) May 12 16:48:52.148: INFO: Container agnhost-master ready: false, restart count 0 May 12 16:48:52.148: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test May 12 16:48:52.333: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 12 16:48:52.333: INFO: Container kindnet-cni ready: true, restart count 0 May 12 16:48:52.333: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) May 12 16:48:52.333: INFO: Container kube-bench ready: false, restart count 0 May 12 16:48:52.333: INFO: pod-qos-class-2757735f-31e8-4654-945e-09eeaa86c538 from pods-2386 started at 2020-05-12 16:48:41 +0000 UTC (1 container status recorded) May 12 16:48:52.333: INFO: Container nginx ready: true, restart count 0 May 12 16:48:52.333: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 12 16:48:52.333: INFO: Container kube-proxy
ready: true, restart count 0 May 12 16:48:52.333: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) May 12 16:48:52.333: INFO: Container kube-hunter ready: false, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-19532ba2-b752-4a32-8a32-58e52b945dfc 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-19532ba2-b752-4a32-8a32-58e52b945dfc off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-19532ba2-b752-4a32-8a32-58e52b945dfc [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:49:06.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8488" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:17.100 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":93,"skipped":1584,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:49:06.686: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0512 16:49:37.369545 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
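For the orphan-propagation check in progress above, the deployment is deleted via the API with deleteOptions.propagationPolicy=Orphan, so the garbage collector must leave the ReplicaSet alone. A rough kubectl equivalent with hypothetical names (newer clients spell the flag --cascade=orphan; older ones used --cascade=false):

    kubectl delete deployment demo-deploy --cascade=orphan
    kubectl get rs            # the deployment's ReplicaSet is still listed afterwards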
May 12 16:49:37.369: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:49:37.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3226" for this suite. • [SLOW TEST:30.690 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":94,"skipped":1586,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:49:37.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-d2be089b-6d8a-41f1-8d4d-a95cde1dd403 STEP: Creating a pod to test consume configMaps May 12 16:49:37.639: INFO: Waiting up to 5m0s for pod "pod-configmaps-fed7bacf-0ba7-4bb4-8eb8-468bfff48e05" in namespace "configmap-7937" to be "success or failure" May 12 16:49:37.661: INFO: Pod "pod-configmaps-fed7bacf-0ba7-4bb4-8eb8-468bfff48e05": Phase="Pending", Reason="", readiness=false. Elapsed: 21.776237ms May 12 16:49:40.195: INFO: Pod "pod-configmaps-fed7bacf-0ba7-4bb4-8eb8-468bfff48e05": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.555097824s May 12 16:49:42.198: INFO: Pod "pod-configmaps-fed7bacf-0ba7-4bb4-8eb8-468bfff48e05": Phase="Pending", Reason="", readiness=false. Elapsed: 4.558283375s May 12 16:49:44.225: INFO: Pod "pod-configmaps-fed7bacf-0ba7-4bb4-8eb8-468bfff48e05": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.585132978s STEP: Saw pod success May 12 16:49:44.225: INFO: Pod "pod-configmaps-fed7bacf-0ba7-4bb4-8eb8-468bfff48e05" satisfied condition "success or failure" May 12 16:49:44.226: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-fed7bacf-0ba7-4bb4-8eb8-468bfff48e05 container configmap-volume-test: STEP: delete the pod May 12 16:49:44.620: INFO: Waiting for pod pod-configmaps-fed7bacf-0ba7-4bb4-8eb8-468bfff48e05 to disappear May 12 16:49:44.680: INFO: Pod pod-configmaps-fed7bacf-0ba7-4bb4-8eb8-468bfff48e05 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:49:44.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7937" for this suite. • [SLOW TEST:7.310 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":95,"skipped":1665,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:49:44.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD May 12 16:49:45.225: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:50:01.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-343" for this suite.
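The "mark a version not served" step above sets served: false on one version of a multi-version CustomResourceDefinition; the apiserver then drops that version's definitions from the published OpenAPI document while leaving the still-served version untouched. A minimal sketch of the same operation with kubectl, assuming a hypothetical two-version CRD named crontabs.stable.example.com (the CRD name, group, and version index below are illustrative, not taken from this run):

$ kubectl patch crd crontabs.stable.example.com --type=json \
    -p='[{"op": "replace", "path": "/spec/versions/0/served", "value": false}]'
# The unserved version's schema should now be absent from the aggregated spec
# (definition key is illustrative; CRD definition names use the reversed group):
$ kubectl get --raw /openapi/v2 | grep -c 'com.example.stable.v1.CronTab'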
• [SLOW TEST:16.603 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":96,"skipped":1699,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:50:01.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-projected-all-test-volume-ae0ebc39-9aed-45bf-a8af-6ebdd3453eda STEP: Creating secret with name secret-projected-all-test-volume-41d4f9f5-3910-422c-900e-26db6c01efb1 STEP: Creating a pod to test Check all projections for projected volume plugin May 12 16:50:02.002: INFO: Waiting up to 5m0s for pod "projected-volume-c49473f7-58bb-4a0d-91ef-51565f45eea6" in namespace "projected-9209" to be "success or failure" May 12 16:50:02.005: INFO: Pod "projected-volume-c49473f7-58bb-4a0d-91ef-51565f45eea6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.032706ms May 12 16:50:04.248: INFO: Pod "projected-volume-c49473f7-58bb-4a0d-91ef-51565f45eea6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.24620843s May 12 16:50:06.307: INFO: Pod "projected-volume-c49473f7-58bb-4a0d-91ef-51565f45eea6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.305020046s STEP: Saw pod success May 12 16:50:06.307: INFO: Pod "projected-volume-c49473f7-58bb-4a0d-91ef-51565f45eea6" satisfied condition "success or failure" May 12 16:50:06.310: INFO: Trying to get logs from node jerma-worker pod projected-volume-c49473f7-58bb-4a0d-91ef-51565f45eea6 container projected-all-volume-test: STEP: delete the pod May 12 16:50:06.856: INFO: Waiting for pod projected-volume-c49473f7-58bb-4a0d-91ef-51565f45eea6 to disappear May 12 16:50:06.866: INFO: Pod projected-volume-c49473f7-58bb-4a0d-91ef-51565f45eea6 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:50:06.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9209" for this suite. 
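The projection test above layers several volume sources into a single mount, which is exactly what a projected volume is for. A minimal sketch of an equivalent pod spec, assuming a configmap and a secret with the illustrative names below already exist in the namespace:

$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: projected-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -l /all-in-one"]
    volumeMounts:
    - name: all-in-one
      mountPath: /all-in-one
  volumes:
  - name: all-in-one
    projected:
      sources:
      - configMap:
          name: demo-configmap   # illustrative name
      - secret:
          name: demo-secret      # illustrative name
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF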
• [SLOW TEST:5.580 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":97,"skipped":1733,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:50:06.873: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name secret-emptykey-test-e132e90b-26c7-4225-a008-566d3961e585 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:50:07.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5012" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":98,"skipped":1744,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:50:07.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating secret secrets-9041/secret-test-931599c3-b4d9-499f-966e-1431dea65b53 STEP: Creating a pod to test consume secrets May 12 16:50:07.742: INFO: Waiting up to 5m0s for pod "pod-configmaps-a8322617-3287-489f-8838-d4785616d972" in namespace "secrets-9041" to be "success or failure" May 12 16:50:08.147: INFO: Pod "pod-configmaps-a8322617-3287-489f-8838-d4785616d972": Phase="Pending", Reason="", readiness=false. Elapsed: 404.998741ms May 12 16:50:10.151: INFO: Pod "pod-configmaps-a8322617-3287-489f-8838-d4785616d972": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.409339541s May 12 16:50:12.176: INFO: Pod "pod-configmaps-a8322617-3287-489f-8838-d4785616d972": Phase="Pending", Reason="", readiness=false. Elapsed: 4.433911612s May 12 16:50:14.179: INFO: Pod "pod-configmaps-a8322617-3287-489f-8838-d4785616d972": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.437765847s STEP: Saw pod success May 12 16:50:14.179: INFO: Pod "pod-configmaps-a8322617-3287-489f-8838-d4785616d972" satisfied condition "success or failure" May 12 16:50:14.183: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-a8322617-3287-489f-8838-d4785616d972 container env-test: STEP: delete the pod May 12 16:50:14.553: INFO: Waiting for pod pod-configmaps-a8322617-3287-489f-8838-d4785616d972 to disappear May 12 16:50:14.679: INFO: Pod pod-configmaps-a8322617-3287-489f-8838-d4785616d972 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:50:14.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9041" for this suite. • [SLOW TEST:7.357 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":99,"skipped":1746,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:50:14.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1790 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 12 16:50:14.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-7620' May 12 16:50:14.967: INFO: stderr: "" May 12 16:50:14.967: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created May 12 16:50:20.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-7620 -o json' May 12 16:50:20.179: INFO: stderr: "" May 12 16:50:20.179: INFO: stdout: 
"{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-12T16:50:14Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-7620\",\n \"resourceVersion\": \"15616676\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-7620/pods/e2e-test-httpd-pod\",\n \"uid\": \"cde45151-1d41-4735-9501-eb596bc8fb27\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-jr4rl\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-jr4rl\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-jr4rl\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-12T16:50:15Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-12T16:50:19Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-12T16:50:19Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-12T16:50:14Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://57bef6facc71d4d6b23a3b3c76eb00bffb78f26404e46757515ce373dde70442\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-12T16:50:19Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.8\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.248\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.248\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-12T16:50:15Z\"\n }\n}\n" STEP: replace the image in the pod May 12 16:50:20.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-7620' May 12 16:50:20.952: INFO: stderr: "" May 12 16:50:20.952: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1795 May 12 16:50:20.957: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-7620' May 12 16:50:39.720: INFO: stderr: "" May 12 16:50:39.720: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:50:39.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7620" for this suite. • [SLOW TEST:25.656 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1786 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":100,"skipped":1783,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:50:40.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-d61e8a46-9000-4248-a5b1-428dbc4aa6f6 STEP: Creating a pod to test consume secrets May 12 16:50:41.510: INFO: Waiting up to 5m0s for pod "pod-secrets-a6a4da0a-7f50-49f9-a0b3-693a3c74697e" in namespace "secrets-1886" to be "success or failure" May 12 16:50:41.790: INFO: Pod "pod-secrets-a6a4da0a-7f50-49f9-a0b3-693a3c74697e": Phase="Pending", Reason="", readiness=false. Elapsed: 279.186291ms May 12 16:50:43.871: INFO: Pod "pod-secrets-a6a4da0a-7f50-49f9-a0b3-693a3c74697e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.360886297s May 12 16:50:46.075: INFO: Pod "pod-secrets-a6a4da0a-7f50-49f9-a0b3-693a3c74697e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.564561665s May 12 16:50:48.078: INFO: Pod "pod-secrets-a6a4da0a-7f50-49f9-a0b3-693a3c74697e": Phase="Running", Reason="", readiness=true. Elapsed: 6.567956935s May 12 16:50:50.083: INFO: Pod "pod-secrets-a6a4da0a-7f50-49f9-a0b3-693a3c74697e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.573034492s STEP: Saw pod success May 12 16:50:50.084: INFO: Pod "pod-secrets-a6a4da0a-7f50-49f9-a0b3-693a3c74697e" satisfied condition "success or failure" May 12 16:50:50.087: INFO: Trying to get logs from node jerma-worker pod pod-secrets-a6a4da0a-7f50-49f9-a0b3-693a3c74697e container secret-volume-test: STEP: delete the pod May 12 16:50:50.136: INFO: Waiting for pod pod-secrets-a6a4da0a-7f50-49f9-a0b3-693a3c74697e to disappear May 12 16:50:50.148: INFO: Pod pod-secrets-a6a4da0a-7f50-49f9-a0b3-693a3c74697e no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:50:50.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1886" for this suite. • [SLOW TEST:9.838 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":101,"skipped":1814,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:50:50.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:50:56.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1008" for this suite. 
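The read-only busybox case above asserts that writes to the container's root filesystem fail when readOnlyRootFilesystem is set. A minimal sketch of the same constraint (pod and container names are illustrative):

$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-fs
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.29
    # "touch" should fail with a read-only file system error:
    command: ["sh", "-c", "touch /newfile && echo WRITABLE || echo read-only as expected"]
    securityContext:
      readOnlyRootFilesystem: true
EOF
$ kubectl logs busybox-readonly-fs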
• [SLOW TEST:6.294 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":102,"skipped":1825,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:50:56.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:51:00.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-200" for this suite. 
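The "should not conflict" case above mounts a secret volume and a configmap volume side by side in one pod (the cleanup steps in the log show both objects); the "wrapper volumes" name refers to the kubelet internally wrapping such volumes, and the test checks the wrappers do not collide. A minimal sketch of the same shape, with illustrative object names:

$ kubectl create secret generic wrapper-secret --from-literal=key=value
$ kubectl create configmap wrapper-configmap --from-literal=key=value
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: wrapper-volume-demo
spec:
  containers:
  - name: test
    image: docker.io/library/busybox:1.29
    command: ["sleep", "3600"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/secret-vol
    - name: configmap-vol
      mountPath: /etc/configmap-vol
  volumes:
  - name: secret-vol
    secret:
      secretName: wrapper-secret
  - name: configmap-vol
    configMap:
      name: wrapper-configmap
EOF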
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":103,"skipped":1842,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:51:00.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1681 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 12 16:51:01.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-4085' May 12 16:51:01.227: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 12 16:51:01.227: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1686 May 12 16:51:01.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-4085' May 12 16:51:01.368: INFO: stderr: "" May 12 16:51:01.368: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:51:01.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4085" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":104,"skipped":1849,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:51:01.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes May 12 16:51:07.942: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice May 12 16:51:13.054: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:51:13.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7001" for this suite. 
• [SLOW TEST:11.914 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":105,"skipped":1855,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:51:13.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating api versions May 12 16:51:14.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' May 12 16:51:15.791: INFO: stderr: "" May 12 16:51:15.791: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:51:15.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3233" for this suite. 
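The api-versions assertion above reduces to checking that the bare "v1" (the core group) appears in discovery output. Outside the suite this is a one-liner:

$ kubectl api-versions | grep -x v1 && echo "core v1 is served"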
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":278,"completed":106,"skipped":1864,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:51:15.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 12 16:51:16.032: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 12 16:51:16.192: INFO: Pod name sample-pod: Found 0 pods out of 1 May 12 16:51:22.130: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 12 16:51:26.440: INFO: Creating deployment "test-rolling-update-deployment" May 12 16:51:26.491: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 12 16:51:26.506: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 12 16:51:28.790: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 12 16:51:28.827: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724899086, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724899086, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724899087, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724899086, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 16:51:30.831: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724899086, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724899086, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724899087, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724899086, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 16:51:32.955: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724899086, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724899086, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724899092, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724899086, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 16:51:34.831: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 12 16:51:34.838: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-3373 /apis/apps/v1/namespaces/deployment-3373/deployments/test-rolling-update-deployment 958c3b79-5ed9-48b2-a3c5-ebfb2ed882a8 15617081 1 2020-05-12 16:51:26 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00082bb68 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-12 16:51:26 +0000 UTC,LastTransitionTime:2020-05-12 16:51:26 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet
"test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-05-12 16:51:33 +0000 UTC,LastTransitionTime:2020-05-12 16:51:26 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 12 16:51:34.841: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-3373 /apis/apps/v1/namespaces/deployment-3373/replicasets/test-rolling-update-deployment-67cf4f6444 b7f6e807-8495-4b23-9bcd-0559cde0eb02 15617065 1 2020-05-12 16:51:26 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 958c3b79-5ed9-48b2-a3c5-ebfb2ed882a8 0xc0022b8147 0xc0022b8148}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0022b81b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 12 16:51:34.841: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 12 16:51:34.841: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-3373 /apis/apps/v1/namespaces/deployment-3373/replicasets/test-rolling-update-controller f3220b00-2d67-48bb-b5ff-d938d3a8eb8a 15617079 2 2020-05-12 16:51:16 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 958c3b79-5ed9-48b2-a3c5-ebfb2ed882a8 0xc0022b8077 0xc0022b8078}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0022b80d8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 
12 16:51:34.843: INFO: Pod "test-rolling-update-deployment-67cf4f6444-ftnwb" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-ftnwb test-rolling-update-deployment-67cf4f6444- deployment-3373 /api/v1/namespaces/deployment-3373/pods/test-rolling-update-deployment-67cf4f6444-ftnwb eba076e8-ae1f-4a08-83f1-fd64f864f825 15617064 0 2020-05-12 16:51:26 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 b7f6e807-8495-4b23-9bcd-0559cde0eb02 0xc003f866b7 0xc003f866b8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-cxg6q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-cxg6q,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-cxg6q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 16:51:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-05-12 16:51:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 16:51:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 16:51:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.120,StartTime:2020-05-12 16:51:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-12 16:51:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://778ff3e517daaf961124be9ba9a60a1ff2973b08185a4edef7c3ac36e2fef1d4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.120,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:51:34.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3373" for this suite. • [SLOW TEST:19.050 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":107,"skipped":1877,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:51:34.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods changes May 12 16:51:42.452: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:51:43.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-1308" for
this suite. • [SLOW TEST:8.635 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":108,"skipped":1901,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:51:43.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 12 16:51:45.822: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-1190 /api/v1/namespaces/watch-1190/configmaps/e2e-watch-test-resource-version 975d1c4c-0ec2-4e23-a1d3-e5a21d0ddf9d 15617172 0 2020-05-12 16:51:44 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 12 16:51:45.823: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-1190 /api/v1/namespaces/watch-1190/configmaps/e2e-watch-test-resource-version 975d1c4c-0ec2-4e23-a1d3-e5a21d0ddf9d 15617175 0 2020-05-12 16:51:44 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:51:45.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1190" for this suite. 
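The watch test above creates a configmap, modifies it twice, deletes it, then opens a watch starting at the resourceVersion returned by the first update; only the second MODIFIED event and the DELETED event arrive, as the two "Got :" lines show. The same watch can be opened against the raw API (substitute the resourceVersion from the first update, which is not printed in this log):

$ RV=<resourceVersion returned by the first update>
$ kubectl get --raw "/api/v1/namespaces/watch-1190/configmaps?watch=true&resourceVersion=${RV}"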
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":109,"skipped":1910,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:51:46.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 12 16:51:49.700: INFO: Pod name wrapped-volume-race-4d817827-8d41-4606-a519-f56920ae7bc5: Found 0 pods out of 5 May 12 16:51:54.820: INFO: Pod name wrapped-volume-race-4d817827-8d41-4606-a519-f56920ae7bc5: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-4d817827-8d41-4606-a519-f56920ae7bc5 in namespace emptydir-wrapper-5605, will wait for the garbage collector to delete the pods May 12 16:52:11.054: INFO: Deleting ReplicationController wrapped-volume-race-4d817827-8d41-4606-a519-f56920ae7bc5 took: 57.909267ms May 12 16:52:11.555: INFO: Terminating ReplicationController wrapped-volume-race-4d817827-8d41-4606-a519-f56920ae7bc5 pods took: 500.222869ms STEP: Creating RC which spawns configmap-volume pods May 12 16:52:31.291: INFO: Pod name wrapped-volume-race-cb3c7586-9b25-45c4-be41-0660be601f10: Found 0 pods out of 5 May 12 16:52:36.344: INFO: Pod name wrapped-volume-race-cb3c7586-9b25-45c4-be41-0660be601f10: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-cb3c7586-9b25-45c4-be41-0660be601f10 in namespace emptydir-wrapper-5605, will wait for the garbage collector to delete the pods May 12 16:52:53.395: INFO: Deleting ReplicationController wrapped-volume-race-cb3c7586-9b25-45c4-be41-0660be601f10 took: 617.349199ms May 12 16:52:54.395: INFO: Terminating ReplicationController wrapped-volume-race-cb3c7586-9b25-45c4-be41-0660be601f10 pods took: 1.000244285s STEP: Creating RC which spawns configmap-volume pods May 12 16:53:09.770: INFO: Pod name wrapped-volume-race-1010fab6-3b11-4415-831b-e3e11dc54228: Found 0 pods out of 5 May 12 16:53:14.778: INFO: Pod name wrapped-volume-race-1010fab6-3b11-4415-831b-e3e11dc54228: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-1010fab6-3b11-4415-831b-e3e11dc54228 in namespace emptydir-wrapper-5605, will wait for the garbage collector to delete the pods May 12 16:53:32.877: INFO: Deleting ReplicationController wrapped-volume-race-1010fab6-3b11-4415-831b-e3e11dc54228 took: 23.018976ms May 12 16:53:33.177: INFO: Terminating ReplicationController wrapped-volume-race-1010fab6-3b11-4415-831b-e3e11dc54228 pods took: 300.369264ms STEP: Cleaning up the 
configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:53:51.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-5605" for this suite. • [SLOW TEST:126.191 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":110,"skipped":1960,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:53:52.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-projected-rrgf STEP: Creating a pod to test atomic-volume-subpath May 12 16:53:53.066: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-rrgf" in namespace "subpath-2500" to be "success or failure" May 12 16:53:53.069: INFO: Pod "pod-subpath-test-projected-rrgf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.70953ms May 12 16:53:55.323: INFO: Pod "pod-subpath-test-projected-rrgf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.256884885s May 12 16:53:57.370: INFO: Pod "pod-subpath-test-projected-rrgf": Phase="Running", Reason="", readiness=true. Elapsed: 4.303863944s May 12 16:53:59.398: INFO: Pod "pod-subpath-test-projected-rrgf": Phase="Running", Reason="", readiness=true. Elapsed: 6.331785773s May 12 16:54:01.425: INFO: Pod "pod-subpath-test-projected-rrgf": Phase="Running", Reason="", readiness=true. Elapsed: 8.358163342s May 12 16:54:03.428: INFO: Pod "pod-subpath-test-projected-rrgf": Phase="Running", Reason="", readiness=true. Elapsed: 10.36198518s May 12 16:54:05.432: INFO: Pod "pod-subpath-test-projected-rrgf": Phase="Running", Reason="", readiness=true. Elapsed: 12.36587479s May 12 16:54:07.437: INFO: Pod "pod-subpath-test-projected-rrgf": Phase="Running", Reason="", readiness=true. Elapsed: 14.37053522s May 12 16:54:09.440: INFO: Pod "pod-subpath-test-projected-rrgf": Phase="Running", Reason="", readiness=true. Elapsed: 16.373570333s May 12 16:54:11.448: INFO: Pod "pod-subpath-test-projected-rrgf": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.381357646s May 12 16:54:13.450: INFO: Pod "pod-subpath-test-projected-rrgf": Phase="Running", Reason="", readiness=true. Elapsed: 20.383983613s May 12 16:54:15.453: INFO: Pod "pod-subpath-test-projected-rrgf": Phase="Running", Reason="", readiness=true. Elapsed: 22.386662977s May 12 16:54:17.559: INFO: Pod "pod-subpath-test-projected-rrgf": Phase="Running", Reason="", readiness=true. Elapsed: 24.492529693s May 12 16:54:19.563: INFO: Pod "pod-subpath-test-projected-rrgf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.496442151s STEP: Saw pod success May 12 16:54:19.563: INFO: Pod "pod-subpath-test-projected-rrgf" satisfied condition "success or failure" May 12 16:54:19.566: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-projected-rrgf container test-container-subpath-projected-rrgf: STEP: delete the pod May 12 16:54:19.608: INFO: Waiting for pod pod-subpath-test-projected-rrgf to disappear May 12 16:54:19.637: INFO: Pod pod-subpath-test-projected-rrgf no longer exists STEP: Deleting pod pod-subpath-test-projected-rrgf May 12 16:54:19.637: INFO: Deleting pod "pod-subpath-test-projected-rrgf" in namespace "subpath-2500" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:54:19.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2500" for this suite. • [SLOW TEST:27.372 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":111,"skipped":1966,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:54:19.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-48d89f25-b07d-41e2-b49e-31071e07c0d4 STEP: Creating a pod to test consume secrets May 12 16:54:19.883: INFO: Waiting up to 5m0s for pod "pod-secrets-d23f22c3-7413-422a-af38-e2279b8f3a21" in namespace "secrets-1520" to be "success or failure" May 12 16:54:19.894: INFO: Pod "pod-secrets-d23f22c3-7413-422a-af38-e2279b8f3a21": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.350133ms May 12 16:54:21.898: INFO: Pod "pod-secrets-d23f22c3-7413-422a-af38-e2279b8f3a21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015251603s May 12 16:54:23.934: INFO: Pod "pod-secrets-d23f22c3-7413-422a-af38-e2279b8f3a21": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051294895s May 12 16:54:25.938: INFO: Pod "pod-secrets-d23f22c3-7413-422a-af38-e2279b8f3a21": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.055065426s STEP: Saw pod success May 12 16:54:25.938: INFO: Pod "pod-secrets-d23f22c3-7413-422a-af38-e2279b8f3a21" satisfied condition "success or failure" May 12 16:54:25.941: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-d23f22c3-7413-422a-af38-e2279b8f3a21 container secret-volume-test: STEP: delete the pod May 12 16:54:26.038: INFO: Waiting for pod pod-secrets-d23f22c3-7413-422a-af38-e2279b8f3a21 to disappear May 12 16:54:26.056: INFO: Pod pod-secrets-d23f22c3-7413-422a-af38-e2279b8f3a21 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:54:26.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1520" for this suite. • [SLOW TEST:6.418 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":112,"skipped":1991,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:54:26.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:54:31.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7893" for this suite. 
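The adoption flow above is label-driven: the test first creates a bare pod carrying the label name=pod-adoption, then a ReplicationController whose selector matches that label, so the controller takes ownership of the orphan instead of spawning a fresh replica. A minimal sketch of the two objects involved, with illustrative names and image rather than the test's exact spec:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-adoption
      labels:
        name: pod-adoption            # label the controller's selector will match
    spec:
      containers:
      - name: pod-adoption
        image: docker.io/library/httpd:2.4.38-alpine
    ---
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: pod-adoption
    spec:
      replicas: 1
      selector:
        name: pod-adoption            # matches the pre-existing pod, so it is adopted
      template:
        metadata:
          labels:
            name: pod-adoption
        spec:
          containers:
          - name: pod-adoption
            image: docker.io/library/httpd:2.4.38-alpine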
• [SLOW TEST:5.489 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":113,"skipped":1991,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SS ------------------------------ [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:54:31.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1585 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 12 16:54:31.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-3078' May 12 16:54:31.712: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 12 16:54:31.712: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created May 12 16:54:31.718: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller May 12 16:54:31.737: INFO: scanned /root for discovery docs: May 12 16:54:31.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-3078' May 12 16:54:53.141: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 12 16:54:53.142: INFO: stdout: "Created e2e-test-httpd-rc-15a39b44d7afcdbe89387e2d21a92934\nScaling up e2e-test-httpd-rc-15a39b44d7afcdbe89387e2d21a92934 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-15a39b44d7afcdbe89387e2d21a92934 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-15a39b44d7afcdbe89387e2d21a92934 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. May 12 16:54:53.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-3078' May 12 16:54:53.247: INFO: stderr: "" May 12 16:54:53.247: INFO: stdout: "e2e-test-httpd-rc-15a39b44d7afcdbe89387e2d21a92934-2fsj9 " May 12 16:54:53.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-15a39b44d7afcdbe89387e2d21a92934-2fsj9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3078' May 12 16:54:53.339: INFO: stderr: "" May 12 16:54:53.339: INFO: stdout: "true" May 12 16:54:53.339: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-15a39b44d7afcdbe89387e2d21a92934-2fsj9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3078' May 12 16:54:53.430: INFO: stderr: "" May 12 16:54:53.430: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" May 12 16:54:53.430: INFO: e2e-test-httpd-rc-15a39b44d7afcdbe89387e2d21a92934-2fsj9 is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1591 May 12 16:54:53.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-3078' May 12 16:54:53.540: INFO: stderr: "" May 12 16:54:53.540: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:54:53.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3078" for this suite.
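As the deprecation warning in the output notes, rolling-update clones the controller under a hashed name, scales the clone up and the original down one pod at a time, then renames the clone back; kubectl rollout on a Deployment is the modern replacement. The --generator=run/v1 invocation above amounts to creating a bare ReplicationController roughly like this sketch (illustrative, assuming the command's defaults):

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: e2e-test-httpd-rc
    spec:
      replicas: 1
      selector:
        run: e2e-test-httpd-rc        # rolling-update later adds a hash label to separate old and new pods
      template:
        metadata:
          labels:
            run: e2e-test-httpd-rc
        spec:
          containers:
          - name: e2e-test-httpd-rc
            image: docker.io/library/httpd:2.4.38-alpine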
• [SLOW TEST:22.091 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1580 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":278,"completed":114,"skipped":1993,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:54:53.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-93ee13bd-bb6b-4694-8403-d5610910714d STEP: Creating a pod to test consume secrets May 12 16:54:54.833: INFO: Waiting up to 5m0s for pod "pod-secrets-872a9a06-2b36-4156-b331-65c6ce6c313e" in namespace "secrets-3595" to be "success or failure" May 12 16:54:55.091: INFO: Pod "pod-secrets-872a9a06-2b36-4156-b331-65c6ce6c313e": Phase="Pending", Reason="", readiness=false. Elapsed: 258.309726ms May 12 16:54:57.180: INFO: Pod "pod-secrets-872a9a06-2b36-4156-b331-65c6ce6c313e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.347027825s May 12 16:54:59.230: INFO: Pod "pod-secrets-872a9a06-2b36-4156-b331-65c6ce6c313e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.396654194s May 12 16:55:01.287: INFO: Pod "pod-secrets-872a9a06-2b36-4156-b331-65c6ce6c313e": Phase="Running", Reason="", readiness=true. Elapsed: 6.454485786s May 12 16:55:03.318: INFO: Pod "pod-secrets-872a9a06-2b36-4156-b331-65c6ce6c313e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.485010209s STEP: Saw pod success May 12 16:55:03.318: INFO: Pod "pod-secrets-872a9a06-2b36-4156-b331-65c6ce6c313e" satisfied condition "success or failure" May 12 16:55:03.321: INFO: Trying to get logs from node jerma-worker pod pod-secrets-872a9a06-2b36-4156-b331-65c6ce6c313e container secret-volume-test: STEP: delete the pod May 12 16:55:03.889: INFO: Waiting for pod pod-secrets-872a9a06-2b36-4156-b331-65c6ce6c313e to disappear May 12 16:55:03.934: INFO: Pod pod-secrets-872a9a06-2b36-4156-b331-65c6ce6c313e no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:55:03.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3595" for this suite. 
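The "mappings and Item Mode set" variant projects one secret key to a renamed path with an explicit file mode, which the container then verifies. A minimal sketch, with illustrative secret name, key, and probe command (the real test uses its own mount-test image):

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-secrets-example
    spec:
      restartPolicy: Never
      containers:
      - name: secret-volume-test
        image: docker.io/library/busybox:1.29
        command: ["/bin/sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"]
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
          readOnly: true
      volumes:
      - name: secret-volume
        secret:
          secretName: secret-test-map-example
          items:
          - key: data-1               # secret key being mapped...
            path: new-path-data-1     # ...to a custom filename
            mode: 0400                # YAML octal literal; stored by the API as decimal 256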
• [SLOW TEST:10.294 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":115,"skipped":2000,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:55:03.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 12 16:55:04.277: INFO: Creating ReplicaSet my-hostname-basic-24c36aeb-d45c-4912-91f5-acb83d6dfb95 May 12 16:55:04.378: INFO: Pod name my-hostname-basic-24c36aeb-d45c-4912-91f5-acb83d6dfb95: Found 0 pods out of 1 May 12 16:55:09.449: INFO: Pod name my-hostname-basic-24c36aeb-d45c-4912-91f5-acb83d6dfb95: Found 1 pods out of 1 May 12 16:55:09.449: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-24c36aeb-d45c-4912-91f5-acb83d6dfb95" is running May 12 16:55:09.451: INFO: Pod "my-hostname-basic-24c36aeb-d45c-4912-91f5-acb83d6dfb95-lnnvw" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 16:55:04 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 16:55:08 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 16:55:08 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 16:55:04 +0000 UTC Reason: Message:}]) May 12 16:55:09.451: INFO: Trying to dial the pod May 12 16:55:14.461: INFO: Controller my-hostname-basic-24c36aeb-d45c-4912-91f5-acb83d6dfb95: Got expected result from replica 1 [my-hostname-basic-24c36aeb-d45c-4912-91f5-acb83d6dfb95-lnnvw]: "my-hostname-basic-24c36aeb-d45c-4912-91f5-acb83d6dfb95-lnnvw", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:55:14.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-9780" for this suite. 
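The ReplicaSet test is the apps/v1 counterpart of the basic RC checks: one replica of a hostname-echoing image is created, and the test dials each pod expecting the pod's own name back (visible above as "Got expected result from replica 1"). A sketch assuming an agnhost-style serve-hostname container; image, tag, and port here are illustrative, not the test's pinned values:

    apiVersion: apps/v1
    kind: ReplicaSet
    metadata:
      name: my-hostname-basic-example
    spec:
      replicas: 1
      selector:
        matchLabels:
          name: my-hostname-basic-example
      template:
        metadata:
          labels:
            name: my-hostname-basic-example
        spec:
          containers:
          - name: my-hostname-basic-example
            image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
            args: ["serve-hostname"]  # replies to HTTP requests with the pod's name
            ports:
            - containerPort: 9376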
• [SLOW TEST:10.523 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":116,"skipped":2028,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:55:14.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override all May 12 16:55:14.633: INFO: Waiting up to 5m0s for pod "client-containers-a5e9eee1-957d-4465-bd53-e2d404ac2a5c" in namespace "containers-6706" to be "success or failure" May 12 16:55:14.656: INFO: Pod "client-containers-a5e9eee1-957d-4465-bd53-e2d404ac2a5c": Phase="Pending", Reason="", readiness=false. Elapsed: 22.816489ms May 12 16:55:16.659: INFO: Pod "client-containers-a5e9eee1-957d-4465-bd53-e2d404ac2a5c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025408705s May 12 16:55:18.850: INFO: Pod "client-containers-a5e9eee1-957d-4465-bd53-e2d404ac2a5c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.216349554s STEP: Saw pod success May 12 16:55:18.850: INFO: Pod "client-containers-a5e9eee1-957d-4465-bd53-e2d404ac2a5c" satisfied condition "success or failure" May 12 16:55:18.852: INFO: Trying to get logs from node jerma-worker pod client-containers-a5e9eee1-957d-4465-bd53-e2d404ac2a5c container test-container: STEP: delete the pod May 12 16:55:19.008: INFO: Waiting for pod client-containers-a5e9eee1-957d-4465-bd53-e2d404ac2a5c to disappear May 12 16:55:19.046: INFO: Pod client-containers-a5e9eee1-957d-4465-bd53-e2d404ac2a5c no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:55:19.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6706" for this suite. 
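"Override all" in the step above means both command (the image's ENTRYPOINT) and args (its CMD) are replaced, so nothing baked into the image runs. A minimal sketch with busybox standing in for the test's entrypoint-tester image:

    apiVersion: v1
    kind: Pod
    metadata:
      name: client-containers-example
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: docker.io/library/busybox:1.29
        command: ["/bin/echo"]        # replaces the image ENTRYPOINT
        args: ["override", "all"]     # replaces the image CMD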
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":117,"skipped":2028,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:55:19.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-3aca8e74-616c-46ec-9d8e-a4c7a48a725e STEP: Creating a pod to test consume configMaps May 12 16:55:19.303: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-83b4e49f-ea23-45d9-aa26-7fbb4fd08a05" in namespace "projected-1162" to be "success or failure" May 12 16:55:19.334: INFO: Pod "pod-projected-configmaps-83b4e49f-ea23-45d9-aa26-7fbb4fd08a05": Phase="Pending", Reason="", readiness=false. Elapsed: 31.302405ms May 12 16:55:21.338: INFO: Pod "pod-projected-configmaps-83b4e49f-ea23-45d9-aa26-7fbb4fd08a05": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03465741s May 12 16:55:23.696: INFO: Pod "pod-projected-configmaps-83b4e49f-ea23-45d9-aa26-7fbb4fd08a05": Phase="Pending", Reason="", readiness=false. Elapsed: 4.392897896s May 12 16:55:25.701: INFO: Pod "pod-projected-configmaps-83b4e49f-ea23-45d9-aa26-7fbb4fd08a05": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.397923052s STEP: Saw pod success May 12 16:55:25.701: INFO: Pod "pod-projected-configmaps-83b4e49f-ea23-45d9-aa26-7fbb4fd08a05" satisfied condition "success or failure" May 12 16:55:25.709: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-83b4e49f-ea23-45d9-aa26-7fbb4fd08a05 container projected-configmap-volume-test: STEP: delete the pod May 12 16:55:25.925: INFO: Waiting for pod pod-projected-configmaps-83b4e49f-ea23-45d9-aa26-7fbb4fd08a05 to disappear May 12 16:55:25.996: INFO: Pod pod-projected-configmaps-83b4e49f-ea23-45d9-aa26-7fbb4fd08a05 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:55:25.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1162" for this suite. 
• [SLOW TEST:7.002 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":118,"skipped":2035,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:55:26.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition May 12 16:55:26.182: INFO: Waiting up to 5m0s for pod "var-expansion-0d074485-dace-4dce-820e-9b9a75f78d3c" in namespace "var-expansion-4898" to be "success or failure" May 12 16:55:26.228: INFO: Pod "var-expansion-0d074485-dace-4dce-820e-9b9a75f78d3c": Phase="Pending", Reason="", readiness=false. Elapsed: 45.079771ms May 12 16:55:28.233: INFO: Pod "var-expansion-0d074485-dace-4dce-820e-9b9a75f78d3c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050951455s May 12 16:55:30.642: INFO: Pod "var-expansion-0d074485-dace-4dce-820e-9b9a75f78d3c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.459684194s May 12 16:55:32.645: INFO: Pod "var-expansion-0d074485-dace-4dce-820e-9b9a75f78d3c": Phase="Running", Reason="", readiness=true. Elapsed: 6.462097094s May 12 16:55:34.648: INFO: Pod "var-expansion-0d074485-dace-4dce-820e-9b9a75f78d3c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.465437226s STEP: Saw pod success May 12 16:55:34.648: INFO: Pod "var-expansion-0d074485-dace-4dce-820e-9b9a75f78d3c" satisfied condition "success or failure" May 12 16:55:34.650: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-0d074485-dace-4dce-820e-9b9a75f78d3c container dapi-container: STEP: delete the pod May 12 16:55:35.171: INFO: Waiting for pod var-expansion-0d074485-dace-4dce-820e-9b9a75f78d3c to disappear May 12 16:55:35.240: INFO: Pod var-expansion-0d074485-dace-4dce-820e-9b9a75f78d3c no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:55:35.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4898" for this suite. 
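Env composition works because $(VAR) references in a container's env are expanded against variables defined earlier in the same list. A minimal sketch (variable names and values follow the common pattern for this test and are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: var-expansion-example
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: docker.io/library/busybox:1.29
        command: ["/bin/sh", "-c", "env"]
        env:
        - name: FOO
          value: foo-value
        - name: BAR
          value: bar-value
        - name: FOOBAR
          value: "$(FOO);;$(BAR)"     # expands to "foo-value;;bar-value"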
• [SLOW TEST:9.348 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":119,"skipped":2039,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:55:35.404: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 12 16:55:35.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR May 12 16:55:36.471: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-12T16:55:36Z generation:1 name:name1 resourceVersion:15618938 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:cc5cca2c-02db-4ce7-96c0-589b2e7c9210] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR May 12 16:55:46.477: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-12T16:55:46Z generation:1 name:name2 resourceVersion:15618977 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:41dfbf4f-59e7-40ee-80b6-de1f7e1b925b] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR May 12 16:55:56.508: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-12T16:55:36Z generation:2 name:name1 resourceVersion:15619004 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:cc5cca2c-02db-4ce7-96c0-589b2e7c9210] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR May 12 16:56:06.608: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-12T16:55:46Z generation:2 name:name2 resourceVersion:15619031 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:41dfbf4f-59e7-40ee-80b6-de1f7e1b925b] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR May 12 16:56:16.615: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-12T16:55:36Z generation:2 name:name1 resourceVersion:15619059 
selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:cc5cca2c-02db-4ce7-96c0-589b2e7c9210] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR May 12 16:56:26.795: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-12T16:55:46Z generation:2 name:name2 resourceVersion:15619089 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:41dfbf4f-59e7-40ee-80b6-de1f7e1b925b] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:56:37.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-6083" for this suite. • [SLOW TEST:62.635 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":120,"skipped":2043,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:56:38.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-4898 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating statefulset ss in namespace statefulset-4898 May 12 16:56:38.448: INFO: Found 0 stateful pods, waiting for 1 May 12 16:56:48.697: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 12 16:56:48.889: INFO: Deleting all statefulset in ns statefulset-4898 May 12 16:56:49.072: INFO: Scaling 
statefulset ss to 0 May 12 16:56:59.451: INFO: Waiting for statefulset status.replicas updated to 0 May 12 16:56:59.454: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:56:59.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4898" for this suite. • [SLOW TEST:21.877 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":121,"skipped":2049,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:56:59.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-0f2f9f0d-8e28-4497-8339-212a73b97b1e STEP: Creating configMap with name cm-test-opt-upd-20f9953e-03f9-4c5f-b546-5c7d0ceb7f3c STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-0f2f9f0d-8e28-4497-8339-212a73b97b1e STEP: Updating configmap cm-test-opt-upd-20f9953e-03f9-4c5f-b546-5c7d0ceb7f3c STEP: Creating configMap with name cm-test-opt-create-99f235a8-1f38-435b-8e69-7b0718f10d3a STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:58:48.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9625" for this suite. 
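The roughly 108-second runtime above is dominated by waiting for the kubelet's periodic volume sync to propagate each ConfigMap change into the running pod. All three volumes are marked optional, which is what lets the pod start, survive the deletion of one referenced ConfigMap, and pick up another that is created only afterwards. A sketch of the volume wiring, with illustrative names:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-configmaps-example
    spec:
      containers:
      - name: configmap-volume-test
        image: docker.io/library/busybox:1.29
        command: ["/bin/sh", "-c", "sleep 3600"]
        volumeMounts:
        - { name: delcm-volume, mountPath: /etc/delcm-volume }
        - { name: updcm-volume, mountPath: /etc/updcm-volume }
        - { name: createcm-volume, mountPath: /etc/createcm-volume }
      volumes:
      - name: delcm-volume
        configMap:
          name: cm-test-opt-del-example
          optional: true              # pod keeps running after this ConfigMap is deleted
      - name: updcm-volume
        configMap:
          name: cm-test-opt-upd-example
          optional: true              # updated contents are synced into the mount
      - name: createcm-volume
        configMap:
          name: cm-test-opt-create-example
          optional: true              # may not exist at pod start; projected once created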
• [SLOW TEST:108.264 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":122,"skipped":2053,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:58:48.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the initial replication controller May 12 16:58:48.379: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6234' May 12 16:59:05.023: INFO: stderr: "" May 12 16:59:05.023: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 12 16:59:05.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6234' May 12 16:59:05.555: INFO: stderr: "" May 12 16:59:05.555: INFO: stdout: "update-demo-nautilus-bdtld " STEP: Replicas for name=update-demo: expected=2 actual=1 May 12 16:59:10.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6234' May 12 16:59:10.746: INFO: stderr: "" May 12 16:59:10.746: INFO: stdout: "update-demo-nautilus-bdtld update-demo-nautilus-h52p9 " May 12 16:59:10.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bdtld -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6234' May 12 16:59:11.092: INFO: stderr: "" May 12 16:59:11.092: INFO: stdout: "true" May 12 16:59:11.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bdtld -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6234' May 12 16:59:11.430: INFO: stderr: "" May 12 16:59:11.430: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 12 16:59:11.430: INFO: validating pod update-demo-nautilus-bdtld May 12 16:59:11.801: INFO: got data: { "image": "nautilus.jpg" } May 12 16:59:11.801: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 12 16:59:11.801: INFO: update-demo-nautilus-bdtld is verified up and running May 12 16:59:11.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h52p9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6234' May 12 16:59:11.996: INFO: stderr: "" May 12 16:59:11.996: INFO: stdout: "" May 12 16:59:11.996: INFO: update-demo-nautilus-h52p9 is created but not running May 12 16:59:16.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6234' May 12 16:59:17.075: INFO: stderr: "" May 12 16:59:17.075: INFO: stdout: "update-demo-nautilus-bdtld update-demo-nautilus-h52p9 " May 12 16:59:17.075: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bdtld -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6234' May 12 16:59:17.174: INFO: stderr: "" May 12 16:59:17.174: INFO: stdout: "true" May 12 16:59:17.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bdtld -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6234' May 12 16:59:17.258: INFO: stderr: "" May 12 16:59:17.258: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 12 16:59:17.258: INFO: validating pod update-demo-nautilus-bdtld May 12 16:59:17.262: INFO: got data: { "image": "nautilus.jpg" } May 12 16:59:17.262: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 12 16:59:17.262: INFO: update-demo-nautilus-bdtld is verified up and running May 12 16:59:17.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h52p9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6234' May 12 16:59:17.351: INFO: stderr: "" May 12 16:59:17.351: INFO: stdout: "true" May 12 16:59:17.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h52p9 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6234' May 12 16:59:17.429: INFO: stderr: "" May 12 16:59:17.429: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 12 16:59:17.429: INFO: validating pod update-demo-nautilus-h52p9 May 12 16:59:17.432: INFO: got data: { "image": "nautilus.jpg" } May 12 16:59:17.432: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 12 16:59:17.432: INFO: update-demo-nautilus-h52p9 is verified up and running STEP: rolling-update to new replication controller May 12 16:59:17.435: INFO: scanned /root for discovery docs: May 12 16:59:17.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-6234' May 12 16:59:40.472: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 12 16:59:40.472: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. May 12 16:59:40.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6234' May 12 16:59:40.827: INFO: stderr: "" May 12 16:59:40.827: INFO: stdout: "update-demo-kitten-npnjs update-demo-kitten-tt7t7 " May 12 16:59:40.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-npnjs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6234' May 12 16:59:40.981: INFO: stderr: "" May 12 16:59:40.981: INFO: stdout: "true" May 12 16:59:40.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-npnjs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6234' May 12 16:59:41.080: INFO: stderr: "" May 12 16:59:41.080: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 12 16:59:41.080: INFO: validating pod update-demo-kitten-npnjs May 12 16:59:41.083: INFO: got data: { "image": "kitten.jpg" } May 12 16:59:41.083: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 12 16:59:41.083: INFO: update-demo-kitten-npnjs is verified up and running May 12 16:59:41.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-tt7t7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6234' May 12 16:59:41.169: INFO: stderr: "" May 12 16:59:41.169: INFO: stdout: "true" May 12 16:59:41.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-tt7t7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6234' May 12 16:59:41.254: INFO: stderr: "" May 12 16:59:41.254: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 12 16:59:41.254: INFO: validating pod update-demo-kitten-tt7t7 May 12 16:59:41.258: INFO: got data: { "image": "kitten.jpg" } May 12 16:59:41.258: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 12 16:59:41.258: INFO: update-demo-kitten-tt7t7 is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 16:59:41.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6234" for this suite. • [SLOW TEST:53.083 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":123,"skipped":2068,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 16:59:41.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 12 16:59:41.406: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3219 /api/v1/namespaces/watch-3219/configmaps/e2e-watch-test-configmap-a c1a44fa4-4dac-43dc-b659-06f69af13638 15619834 0 2020-05-12 16:59:41 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 12 16:59:41.406: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3219 /api/v1/namespaces/watch-3219/configmaps/e2e-watch-test-configmap-a c1a44fa4-4dac-43dc-b659-06f69af13638 15619834 0 2020-05-12 
16:59:41 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 12 16:59:51.413: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3219 /api/v1/namespaces/watch-3219/configmaps/e2e-watch-test-configmap-a c1a44fa4-4dac-43dc-b659-06f69af13638 15619896 0 2020-05-12 16:59:41 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 12 16:59:51.413: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3219 /api/v1/namespaces/watch-3219/configmaps/e2e-watch-test-configmap-a c1a44fa4-4dac-43dc-b659-06f69af13638 15619896 0 2020-05-12 16:59:41 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 12 17:00:02.298: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3219 /api/v1/namespaces/watch-3219/configmaps/e2e-watch-test-configmap-a c1a44fa4-4dac-43dc-b659-06f69af13638 15619929 0 2020-05-12 16:59:41 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 12 17:00:02.298: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3219 /api/v1/namespaces/watch-3219/configmaps/e2e-watch-test-configmap-a c1a44fa4-4dac-43dc-b659-06f69af13638 15619929 0 2020-05-12 16:59:41 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 12 17:00:12.312: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3219 /api/v1/namespaces/watch-3219/configmaps/e2e-watch-test-configmap-a c1a44fa4-4dac-43dc-b659-06f69af13638 15619956 0 2020-05-12 16:59:41 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 12 17:00:12.313: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3219 /api/v1/namespaces/watch-3219/configmaps/e2e-watch-test-configmap-a c1a44fa4-4dac-43dc-b659-06f69af13638 15619956 0 2020-05-12 16:59:41 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 12 17:00:22.318: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3219 /api/v1/namespaces/watch-3219/configmaps/e2e-watch-test-configmap-b 84ef0e99-a047-486c-8991-ac6a6ce985d2 15619989 0 2020-05-12 17:00:22 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 12 17:00:22.318: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3219 /api/v1/namespaces/watch-3219/configmaps/e2e-watch-test-configmap-b 84ef0e99-a047-486c-8991-ac6a6ce985d2 15619989 0 2020-05-12 17:00:22 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and 
ensuring the correct watchers observe the notification May 12 17:00:32.634: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3219 /api/v1/namespaces/watch-3219/configmaps/e2e-watch-test-configmap-b 84ef0e99-a047-486c-8991-ac6a6ce985d2 15620017 0 2020-05-12 17:00:22 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 12 17:00:32.634: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3219 /api/v1/namespaces/watch-3219/configmaps/e2e-watch-test-configmap-b 84ef0e99-a047-486c-8991-ac6a6ce985d2 15620017 0 2020-05-12 17:00:22 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:00:42.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3219" for this suite. • [SLOW TEST:61.377 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":124,"skipped":2083,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:00:42.641: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-7710 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-7710 STEP: creating replication controller externalsvc in namespace services-7710 I0512 17:00:45.079282 7 runners.go:189] Created replication controller with name: externalsvc, namespace: services-7710, replica count: 2 I0512 17:00:48.129765 7 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 17:00:51.129992 7 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 17:00:54.130182 7 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 
inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 17:00:57.130386 7 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName May 12 17:00:57.316: INFO: Creating new exec pod May 12 17:01:03.993: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7710 execpod8srqj -- /bin/sh -x -c nslookup clusterip-service' May 12 17:01:04.208: INFO: stderr: "I0512 17:01:04.120510 2788 log.go:172] (0xc00096cd10) (0xc000948320) Create stream\nI0512 17:01:04.120560 2788 log.go:172] (0xc00096cd10) (0xc000948320) Stream added, broadcasting: 1\nI0512 17:01:04.124388 2788 log.go:172] (0xc00096cd10) Reply frame received for 1\nI0512 17:01:04.124429 2788 log.go:172] (0xc00096cd10) (0xc0005e26e0) Create stream\nI0512 17:01:04.124449 2788 log.go:172] (0xc00096cd10) (0xc0005e26e0) Stream added, broadcasting: 3\nI0512 17:01:04.125677 2788 log.go:172] (0xc00096cd10) Reply frame received for 3\nI0512 17:01:04.125711 2788 log.go:172] (0xc00096cd10) (0xc0003d74a0) Create stream\nI0512 17:01:04.125721 2788 log.go:172] (0xc00096cd10) (0xc0003d74a0) Stream added, broadcasting: 5\nI0512 17:01:04.126599 2788 log.go:172] (0xc00096cd10) Reply frame received for 5\nI0512 17:01:04.193575 2788 log.go:172] (0xc00096cd10) Data frame received for 5\nI0512 17:01:04.193606 2788 log.go:172] (0xc0003d74a0) (5) Data frame handling\nI0512 17:01:04.193644 2788 log.go:172] (0xc0003d74a0) (5) Data frame sent\n+ nslookup clusterip-service\nI0512 17:01:04.200495 2788 log.go:172] (0xc00096cd10) Data frame received for 3\nI0512 17:01:04.200520 2788 log.go:172] (0xc0005e26e0) (3) Data frame handling\nI0512 17:01:04.200539 2788 log.go:172] (0xc0005e26e0) (3) Data frame sent\nI0512 17:01:04.201777 2788 log.go:172] (0xc00096cd10) Data frame received for 3\nI0512 17:01:04.201800 2788 log.go:172] (0xc0005e26e0) (3) Data frame handling\nI0512 17:01:04.201820 2788 log.go:172] (0xc0005e26e0) (3) Data frame sent\nI0512 17:01:04.202116 2788 log.go:172] (0xc00096cd10) Data frame received for 3\nI0512 17:01:04.202136 2788 log.go:172] (0xc0005e26e0) (3) Data frame handling\nI0512 17:01:04.202160 2788 log.go:172] (0xc00096cd10) Data frame received for 5\nI0512 17:01:04.202171 2788 log.go:172] (0xc0003d74a0) (5) Data frame handling\nI0512 17:01:04.203695 2788 log.go:172] (0xc00096cd10) Data frame received for 1\nI0512 17:01:04.203720 2788 log.go:172] (0xc000948320) (1) Data frame handling\nI0512 17:01:04.203734 2788 log.go:172] (0xc000948320) (1) Data frame sent\nI0512 17:01:04.203749 2788 log.go:172] (0xc00096cd10) (0xc000948320) Stream removed, broadcasting: 1\nI0512 17:01:04.203766 2788 log.go:172] (0xc00096cd10) Go away received\nI0512 17:01:04.204075 2788 log.go:172] (0xc00096cd10) (0xc000948320) Stream removed, broadcasting: 1\nI0512 17:01:04.204090 2788 log.go:172] (0xc00096cd10) (0xc0005e26e0) Stream removed, broadcasting: 3\nI0512 17:01:04.204096 2788 log.go:172] (0xc00096cd10) (0xc0003d74a0) Stream removed, broadcasting: 5\n" May 12 17:01:04.208: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-7710.svc.cluster.local\tcanonical name = externalsvc.services-7710.svc.cluster.local.\nName:\texternalsvc.services-7710.svc.cluster.local\nAddress: 10.103.251.194\n\n" STEP: deleting ReplicationController externalsvc in namespace services-7710, will wait for the garbage collector to delete the pods May 12 17:01:04.266: 
INFO: Deleting ReplicationController externalsvc took: 5.096264ms May 12 17:01:04.566: INFO: Terminating ReplicationController externalsvc pods took: 300.219679ms May 12 17:01:10.130: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:01:10.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7710" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:27.543 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":125,"skipped":2083,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:01:10.184: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-0d13233f-63a7-4fdd-95a8-45608c6e6376 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:01:20.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2637" for this suite. 
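For reference, the binary-data test above creates a ConfigMap carrying both a text key (Data) and a raw-bytes key (BinaryData) and then waits for both to surface as files in the mounted volume. A minimal sketch of such an object, assuming client-go v0.18+ context-aware signatures (the suite here is v1.17-era, where Create took no context); the helper name and key/value contents are illustrative, not the test's actual fixtures:

```go
package example

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createMixedConfigMap stores a UTF-8 value under Data and raw bytes under
// BinaryData; when the ConfigMap is mounted as a volume, both keys appear
// as files, which is what the "Waiting for pod with text data" and
// "Waiting for pod with binary data" steps above are checking.
func createMixedConfigMap(ctx context.Context, cs kubernetes.Interface, ns string) (*corev1.ConfigMap, error) {
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-mixed"}, // illustrative name
		Data:       map[string]string{"data": "value"},
		BinaryData: map[string][]byte{"dump.bin": {0xde, 0xca, 0xfe}},
	}
	return cs.CoreV1().ConfigMaps(ns).Create(ctx, cm, metav1.CreateOptions{})
}
```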
• [SLOW TEST:10.340 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":126,"skipped":2092,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:01:20.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 12 17:01:21.096: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8eb38096-dfd0-437d-8f5e-b83443ac508f" in namespace "downward-api-5601" to be "success or failure" May 12 17:01:21.106: INFO: Pod "downwardapi-volume-8eb38096-dfd0-437d-8f5e-b83443ac508f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.072381ms May 12 17:01:23.185: INFO: Pod "downwardapi-volume-8eb38096-dfd0-437d-8f5e-b83443ac508f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088312704s May 12 17:01:25.474: INFO: Pod "downwardapi-volume-8eb38096-dfd0-437d-8f5e-b83443ac508f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.377933169s May 12 17:01:27.738: INFO: Pod "downwardapi-volume-8eb38096-dfd0-437d-8f5e-b83443ac508f": Phase="Running", Reason="", readiness=true. Elapsed: 6.641282644s May 12 17:01:29.748: INFO: Pod "downwardapi-volume-8eb38096-dfd0-437d-8f5e-b83443ac508f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.651177246s STEP: Saw pod success May 12 17:01:29.748: INFO: Pod "downwardapi-volume-8eb38096-dfd0-437d-8f5e-b83443ac508f" satisfied condition "success or failure" May 12 17:01:29.750: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-8eb38096-dfd0-437d-8f5e-b83443ac508f container client-container: STEP: delete the pod May 12 17:01:29.808: INFO: Waiting for pod downwardapi-volume-8eb38096-dfd0-437d-8f5e-b83443ac508f to disappear May 12 17:01:30.072: INFO: Pod downwardapi-volume-8eb38096-dfd0-437d-8f5e-b83443ac508f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:01:30.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5601" for this suite. 
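The "should provide podname only" test that follows mounts a downwardAPI volume whose single item maps metadata.name to a file. A minimal sketch of that volume built with the corev1 types the suite itself uses; the volume name is an assumption, not taken from the test:

```go
package example

import corev1 "k8s.io/api/core/v1"

// downwardAPIPodnameVolume builds the volume the "podname only" test mounts:
// the kubelet writes the pod's own metadata.name into a file named "podname",
// so the client container only has to read <mountPath>/podname.
func downwardAPIPodnameVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo", // illustrative volume name
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path:     "podname",
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
				}},
			},
		},
	}
}
```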
• [SLOW TEST:10.267 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":127,"skipped":2106,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:01:30.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 12 17:01:31.768: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 12 17:01:35.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7091 create -f -' May 12 17:01:36.166: INFO: stderr: "" May 12 17:01:36.166: INFO: stdout: "e2e-test-crd-publish-openapi-2619-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 12 17:01:36.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7091 delete e2e-test-crd-publish-openapi-2619-crds test-cr' May 12 17:01:36.354: INFO: stderr: "" May 12 17:01:36.354: INFO: stdout: "e2e-test-crd-publish-openapi-2619-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" May 12 17:01:36.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7091 apply -f -' May 12 17:01:36.689: INFO: stderr: "" May 12 17:01:36.690: INFO: stdout: "e2e-test-crd-publish-openapi-2619-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 12 17:01:36.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7091 delete e2e-test-crd-publish-openapi-2619-crds test-cr' May 12 17:01:36.817: INFO: stderr: "" May 12 17:01:36.817: INFO: stdout: "e2e-test-crd-publish-openapi-2619-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 12 17:01:36.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2619-crds' May 12 17:01:37.177: INFO: stderr: "" May 12 17:01:37.177: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2619-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI 
[Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:01:40.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7091" for this suite. • [SLOW TEST:9.789 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":128,"skipped":2138,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:01:40.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted May 12 17:01:48.448: INFO: 9 pods remaining May 12 17:01:48.448: INFO: 0 pods has nil DeletionTimestamp May 12 17:01:48.448: INFO: May 12 17:01:49.321: INFO: 0 pods remaining May 12 17:01:49.321: INFO: 0 pods has nil DeletionTimestamp May 12 17:01:49.321: INFO: May 12 17:01:50.372: INFO: 0 pods remaining May 12 17:01:50.372: INFO: 0 pods has nil DeletionTimestamp May 12 17:01:50.372: INFO: STEP: Gathering metrics W0512 17:01:51.183114 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
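The deleteOptions behavior this garbage-collector test asserts is foreground cascading deletion: the ReplicationController lingers (with a deletionTimestamp set) until every pod it owns is gone, which matches the "9 pods remaining ... 0 pods remaining" countdown above. A minimal client-go sketch, again assuming v0.18+ signatures; the helper name is illustrative:

```go
package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteRCForeground deletes a ReplicationController with foreground
// propagation: the RC object keeps existing until the garbage collector has
// removed all of its pods, i.e. "keep the rc around until all its pods are
// deleted if the deleteOptions says so".
func deleteRCForeground(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	policy := metav1.DeletePropagationForeground
	return cs.CoreV1().ReplicationControllers(ns).Delete(ctx, name, metav1.DeleteOptions{
		PropagationPolicy: &policy,
	})
}
```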
May 12 17:01:51.183: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:01:51.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4588" for this suite. • [SLOW TEST:10.606 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":129,"skipped":2162,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:01:51.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs May 12 17:01:53.015: INFO: Waiting up to 5m0s for pod "pod-d8d100ed-304d-426a-91c9-98e38977b3ae" in namespace "emptydir-2736" to be "success or failure" May 12 17:01:54.127: INFO: Pod "pod-d8d100ed-304d-426a-91c9-98e38977b3ae": Phase="Pending", Reason="", readiness=false. Elapsed: 1.111874612s May 12 17:01:56.402: INFO: Pod "pod-d8d100ed-304d-426a-91c9-98e38977b3ae": Phase="Pending", Reason="", readiness=false. Elapsed: 3.387169525s May 12 17:01:58.845: INFO: Pod "pod-d8d100ed-304d-426a-91c9-98e38977b3ae": Phase="Pending", Reason="", readiness=false. Elapsed: 5.830368029s May 12 17:02:01.000: INFO: Pod "pod-d8d100ed-304d-426a-91c9-98e38977b3ae": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 7.985137174s STEP: Saw pod success May 12 17:02:01.000: INFO: Pod "pod-d8d100ed-304d-426a-91c9-98e38977b3ae" satisfied condition "success or failure" May 12 17:02:01.052: INFO: Trying to get logs from node jerma-worker pod pod-d8d100ed-304d-426a-91c9-98e38977b3ae container test-container: STEP: delete the pod May 12 17:02:01.905: INFO: Waiting for pod pod-d8d100ed-304d-426a-91c9-98e38977b3ae to disappear May 12 17:02:01.927: INFO: Pod pod-d8d100ed-304d-426a-91c9-98e38977b3ae no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:02:01.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2736" for this suite. • [SLOW TEST:11.390 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":130,"skipped":2163,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:02:02.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 12 17:02:03.857: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8838 /api/v1/namespaces/watch-8838/configmaps/e2e-watch-test-label-changed 53c8c694-f4f0-443b-9df9-d40aa269a2b3 15620558 0 2020-05-12 17:02:03 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 12 17:02:03.857: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8838 /api/v1/namespaces/watch-8838/configmaps/e2e-watch-test-label-changed 53c8c694-f4f0-443b-9df9-d40aa269a2b3 15620560 0 2020-05-12 17:02:03 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 12 17:02:03.857: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8838 /api/v1/namespaces/watch-8838/configmaps/e2e-watch-test-label-changed 53c8c694-f4f0-443b-9df9-d40aa269a2b3 15620562 0 2020-05-12 17:02:03 +0000 UTC 
map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 12 17:02:14.252: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8838 /api/v1/namespaces/watch-8838/configmaps/e2e-watch-test-label-changed 53c8c694-f4f0-443b-9df9-d40aa269a2b3 15620607 0 2020-05-12 17:02:03 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 12 17:02:14.253: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8838 /api/v1/namespaces/watch-8838/configmaps/e2e-watch-test-label-changed 53c8c694-f4f0-443b-9df9-d40aa269a2b3 15620608 0 2020-05-12 17:02:03 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} May 12 17:02:14.253: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-8838 /api/v1/namespaces/watch-8838/configmaps/e2e-watch-test-label-changed 53c8c694-f4f0-443b-9df9-d40aa269a2b3 15620609 0 2020-05-12 17:02:03 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:02:14.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8838" for this suite. 
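The watch test above opens a watch restricted by a label selector, which is why relabeling the ConfigMap out of the selector suppresses notifications until the label is restored. A minimal client-go sketch of such a watch, assuming v0.18+ context-aware signatures; the helper name is illustrative:

```go
package example

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// watchLabeledConfigMaps delivers events only for ConfigMaps carrying the
// test's label. An object relabeled out of the selector surfaces as DELETED,
// and restoring the label surfaces as ADDED, matching the log above.
func watchLabeledConfigMaps(ctx context.Context, cs kubernetes.Interface, ns string) error {
	w, err := cs.CoreV1().ConfigMaps(ns).Watch(ctx, metav1.ListOptions{
		LabelSelector: "watch-this-configmap=label-changed-and-restored",
	})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		cm, ok := ev.Object.(*corev1.ConfigMap)
		if !ok {
			continue // e.g. a *metav1.Status delivered on a watch error
		}
		fmt.Printf("Got : %s %s mutation=%s\n", ev.Type, cm.Name, cm.Data["mutation"])
	}
	return nil
}
```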
• [SLOW TEST:11.696 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":131,"skipped":2166,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:02:14.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's args May 12 17:02:14.414: INFO: Waiting up to 5m0s for pod "var-expansion-f0787c61-5d75-42f6-b18f-05cdafe66a38" in namespace "var-expansion-6961" to be "success or failure" May 12 17:02:14.431: INFO: Pod "var-expansion-f0787c61-5d75-42f6-b18f-05cdafe66a38": Phase="Pending", Reason="", readiness=false. Elapsed: 16.284627ms May 12 17:02:16.467: INFO: Pod "var-expansion-f0787c61-5d75-42f6-b18f-05cdafe66a38": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053092533s May 12 17:02:18.522: INFO: Pod "var-expansion-f0787c61-5d75-42f6-b18f-05cdafe66a38": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107408418s May 12 17:02:20.525: INFO: Pod "var-expansion-f0787c61-5d75-42f6-b18f-05cdafe66a38": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.110792482s STEP: Saw pod success May 12 17:02:20.525: INFO: Pod "var-expansion-f0787c61-5d75-42f6-b18f-05cdafe66a38" satisfied condition "success or failure" May 12 17:02:20.527: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-f0787c61-5d75-42f6-b18f-05cdafe66a38 container dapi-container: STEP: delete the pod May 12 17:02:20.552: INFO: Waiting for pod var-expansion-f0787c61-5d75-42f6-b18f-05cdafe66a38 to disappear May 12 17:02:20.557: INFO: Pod var-expansion-f0787c61-5d75-42f6-b18f-05cdafe66a38 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:02:20.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6961" for this suite. 
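The variable-expansion test that follows relies on the kubelet substituting $(VAR) references in a container's args from that container's own environment before it starts. A minimal sketch of such a container spec; the image, command, and message are illustrative stand-ins for the test's fixtures:

```go
package example

import corev1 "k8s.io/api/core/v1"

// argExpansionContainer shows the substitution under test: the kubelet
// expands $(MESSAGE) in Args from the container's Env before start; a
// $(VAR) that matches no declared variable is left as literal text.
func argExpansionContainer() corev1.Container {
	return corev1.Container{
		Name:    "dapi-container",
		Image:   "busybox", // illustrative; the suite uses its own test image
		Command: []string{"sh", "-c"},
		Args:    []string{"echo $(MESSAGE)"},
		Env:     []corev1.EnvVar{{Name: "MESSAGE", Value: "test message"}},
	}
}
```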
• [SLOW TEST:6.289 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":132,"skipped":2170,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:02:20.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 12 17:02:20.875: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3a7524bd-5f7d-4f91-958a-81b90fbf6fd8" in namespace "projected-2754" to be "success or failure" May 12 17:02:20.909: INFO: Pod "downwardapi-volume-3a7524bd-5f7d-4f91-958a-81b90fbf6fd8": Phase="Pending", Reason="", readiness=false. Elapsed: 33.864353ms May 12 17:02:22.976: INFO: Pod "downwardapi-volume-3a7524bd-5f7d-4f91-958a-81b90fbf6fd8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100954823s May 12 17:02:25.005: INFO: Pod "downwardapi-volume-3a7524bd-5f7d-4f91-958a-81b90fbf6fd8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.129844629s May 12 17:02:27.371: INFO: Pod "downwardapi-volume-3a7524bd-5f7d-4f91-958a-81b90fbf6fd8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.495986371s May 12 17:02:29.375: INFO: Pod "downwardapi-volume-3a7524bd-5f7d-4f91-958a-81b90fbf6fd8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.500018958s STEP: Saw pod success May 12 17:02:29.376: INFO: Pod "downwardapi-volume-3a7524bd-5f7d-4f91-958a-81b90fbf6fd8" satisfied condition "success or failure" May 12 17:02:29.379: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-3a7524bd-5f7d-4f91-958a-81b90fbf6fd8 container client-container: STEP: delete the pod May 12 17:02:30.008: INFO: Waiting for pod downwardapi-volume-3a7524bd-5f7d-4f91-958a-81b90fbf6fd8 to disappear May 12 17:02:30.018: INFO: Pod downwardapi-volume-3a7524bd-5f7d-4f91-958a-81b90fbf6fd8 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:02:30.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2754" for this suite. 
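The projected downwardAPI test that follows exposes the container's CPU request through a resourceFieldRef item (rather than the fieldRef used for pod metadata). A minimal sketch of that projection; the file path, divisor, and helper name are assumptions:

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// cpuRequestProjection writes the named container's CPU request into a file
// "cpu_request" via a projected downwardAPI volume source; divisor "1"
// reports whole cores (use "1m" to report millicores instead).
func cpuRequestProjection(containerName string) corev1.VolumeProjection {
	return corev1.VolumeProjection{
		DownwardAPI: &corev1.DownwardAPIProjection{
			Items: []corev1.DownwardAPIVolumeFile{{
				Path: "cpu_request",
				ResourceFieldRef: &corev1.ResourceFieldSelector{
					ContainerName: containerName,
					Resource:      "requests.cpu",
					Divisor:       resource.MustParse("1"),
				},
			}},
		},
	}
}
```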
• [SLOW TEST:9.469 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":133,"skipped":2187,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:02:30.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 12 17:02:30.244: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-663 I0512 17:02:30.266316 7 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-663, replica count: 1 I0512 17:02:31.316715 7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 17:02:32.316911 7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 17:02:33.317098 7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 17:02:34.317435 7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 17:02:35.317611 7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 17:02:36.317815 7 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 12 17:02:36.552: INFO: Created: latency-svc-56vlm May 12 17:02:36.556: INFO: Got endpoints: latency-svc-56vlm [138.957692ms] May 12 17:02:36.635: INFO: Created: latency-svc-wcz5d May 12 17:02:36.785: INFO: Got endpoints: latency-svc-wcz5d [228.508328ms] May 12 17:02:36.833: INFO: Created: latency-svc-28h99 May 12 17:02:36.935: INFO: Got endpoints: latency-svc-28h99 [377.614129ms] May 12 17:02:36.995: INFO: Created: latency-svc-jxx25 May 12 17:02:37.163: INFO: Got endpoints: latency-svc-jxx25 [605.906665ms] May 12 17:02:37.168: INFO: Created: latency-svc-4lcqw May 12 17:02:37.218: INFO: Got endpoints: latency-svc-4lcqw [660.708423ms] May 12 17:02:37.403: INFO: Created: latency-svc-vw4b9 May 12 17:02:37.434: INFO: Got endpoints: latency-svc-vw4b9 [876.034428ms] May 12 17:02:37.494: INFO: 
Created: latency-svc-5nmhb May 12 17:02:37.659: INFO: Got endpoints: latency-svc-5nmhb [1.101613638s] May 12 17:02:37.662: INFO: Created: latency-svc-kkhkl May 12 17:02:37.752: INFO: Got endpoints: latency-svc-kkhkl [1.193927791s] May 12 17:02:37.907: INFO: Created: latency-svc-cs568 May 12 17:02:37.911: INFO: Got endpoints: latency-svc-cs568 [1.352427927s] May 12 17:02:38.108: INFO: Created: latency-svc-2b7gr May 12 17:02:38.173: INFO: Got endpoints: latency-svc-2b7gr [1.615258705s] May 12 17:02:38.288: INFO: Created: latency-svc-vcxv4 May 12 17:02:38.334: INFO: Got endpoints: latency-svc-vcxv4 [1.775645289s] May 12 17:02:38.910: INFO: Created: latency-svc-l248l May 12 17:02:39.137: INFO: Got endpoints: latency-svc-l248l [2.57910473s] May 12 17:02:39.330: INFO: Created: latency-svc-ww527 May 12 17:02:39.637: INFO: Got endpoints: latency-svc-ww527 [3.078125952s] May 12 17:02:39.713: INFO: Created: latency-svc-sx98f May 12 17:02:39.729: INFO: Got endpoints: latency-svc-sx98f [3.169677446s] May 12 17:02:39.917: INFO: Created: latency-svc-556wv May 12 17:02:39.921: INFO: Got endpoints: latency-svc-556wv [3.361310741s] May 12 17:02:39.978: INFO: Created: latency-svc-nz5tn May 12 17:02:40.000: INFO: Got endpoints: latency-svc-nz5tn [3.44077949s] May 12 17:02:40.108: INFO: Created: latency-svc-7w4ph May 12 17:02:40.119: INFO: Got endpoints: latency-svc-7w4ph [3.333462221s] May 12 17:02:40.152: INFO: Created: latency-svc-5qhdq May 12 17:02:40.174: INFO: Got endpoints: latency-svc-5qhdq [3.238973904s] May 12 17:02:40.252: INFO: Created: latency-svc-4zbdh May 12 17:02:40.284: INFO: Got endpoints: latency-svc-4zbdh [3.120906592s] May 12 17:02:40.285: INFO: Created: latency-svc-k2kzx May 12 17:02:40.324: INFO: Got endpoints: latency-svc-k2kzx [3.105307641s] May 12 17:02:40.408: INFO: Created: latency-svc-2tnc4 May 12 17:02:40.422: INFO: Got endpoints: latency-svc-2tnc4 [2.988212203s] May 12 17:02:40.471: INFO: Created: latency-svc-8jkws May 12 17:02:40.492: INFO: Got endpoints: latency-svc-8jkws [2.832777848s] May 12 17:02:40.584: INFO: Created: latency-svc-fdj5h May 12 17:02:40.619: INFO: Got endpoints: latency-svc-fdj5h [2.867543254s] May 12 17:02:40.668: INFO: Created: latency-svc-jhd2f May 12 17:02:40.749: INFO: Got endpoints: latency-svc-jhd2f [2.838170056s] May 12 17:02:40.806: INFO: Created: latency-svc-p76tr May 12 17:02:40.916: INFO: Got endpoints: latency-svc-p76tr [2.742760603s] May 12 17:02:40.926: INFO: Created: latency-svc-klldl May 12 17:02:40.949: INFO: Got endpoints: latency-svc-klldl [2.614357456s] May 12 17:02:41.073: INFO: Created: latency-svc-ngd2c May 12 17:02:41.119: INFO: Got endpoints: latency-svc-ngd2c [1.981143323s] May 12 17:02:41.119: INFO: Created: latency-svc-9xltw May 12 17:02:41.147: INFO: Got endpoints: latency-svc-9xltw [1.5099408s] May 12 17:02:41.326: INFO: Created: latency-svc-nvptw May 12 17:02:41.330: INFO: Got endpoints: latency-svc-nvptw [1.601224036s] May 12 17:02:41.534: INFO: Created: latency-svc-6cdbx May 12 17:02:41.612: INFO: Created: latency-svc-kzr6t May 12 17:02:41.614: INFO: Got endpoints: latency-svc-6cdbx [1.69307568s] May 12 17:02:41.737: INFO: Got endpoints: latency-svc-kzr6t [1.737469953s] May 12 17:02:41.743: INFO: Created: latency-svc-fvq8v May 12 17:02:41.778: INFO: Got endpoints: latency-svc-fvq8v [1.65941195s] May 12 17:02:41.929: INFO: Created: latency-svc-jscl6 May 12 17:02:41.942: INFO: Got endpoints: latency-svc-jscl6 [1.767935382s] May 12 17:02:42.121: INFO: Created: latency-svc-pmjlh May 12 17:02:42.150: INFO: Got endpoints: 
latency-svc-pmjlh [1.865632483s] May 12 17:02:42.270: INFO: Created: latency-svc-47th6 May 12 17:02:42.276: INFO: Got endpoints: latency-svc-47th6 [1.952004385s] May 12 17:02:42.319: INFO: Created: latency-svc-5g9br May 12 17:02:42.414: INFO: Got endpoints: latency-svc-5g9br [1.991738269s] May 12 17:02:42.441: INFO: Created: latency-svc-nxlcq May 12 17:02:42.456: INFO: Got endpoints: latency-svc-nxlcq [1.96369187s] May 12 17:02:42.558: INFO: Created: latency-svc-zk8h6 May 12 17:02:42.598: INFO: Got endpoints: latency-svc-zk8h6 [1.978421197s] May 12 17:02:42.738: INFO: Created: latency-svc-75ksh May 12 17:02:42.790: INFO: Got endpoints: latency-svc-75ksh [2.040456689s] May 12 17:02:42.791: INFO: Created: latency-svc-tddz7 May 12 17:02:42.829: INFO: Got endpoints: latency-svc-tddz7 [1.912610798s] May 12 17:02:42.887: INFO: Created: latency-svc-5dddk May 12 17:02:42.902: INFO: Got endpoints: latency-svc-5dddk [1.952794364s] May 12 17:02:42.935: INFO: Created: latency-svc-7qflh May 12 17:02:42.949: INFO: Got endpoints: latency-svc-7qflh [1.830752027s] May 12 17:02:43.032: INFO: Created: latency-svc-h94pj May 12 17:02:43.045: INFO: Got endpoints: latency-svc-h94pj [1.898033907s] May 12 17:02:43.103: INFO: Created: latency-svc-mdrk6 May 12 17:02:43.193: INFO: Got endpoints: latency-svc-mdrk6 [1.862956478s] May 12 17:02:43.235: INFO: Created: latency-svc-qqcv9 May 12 17:02:43.244: INFO: Got endpoints: latency-svc-qqcv9 [1.629960714s] May 12 17:02:43.337: INFO: Created: latency-svc-v66hr May 12 17:02:43.372: INFO: Got endpoints: latency-svc-v66hr [1.63449043s] May 12 17:02:43.486: INFO: Created: latency-svc-wn2dh May 12 17:02:43.502: INFO: Got endpoints: latency-svc-wn2dh [1.723296093s] May 12 17:02:43.566: INFO: Created: latency-svc-658p2 May 12 17:02:43.708: INFO: Got endpoints: latency-svc-658p2 [1.766371471s] May 12 17:02:43.721: INFO: Created: latency-svc-mcjx2 May 12 17:02:43.779: INFO: Got endpoints: latency-svc-mcjx2 [1.628689514s] May 12 17:02:44.136: INFO: Created: latency-svc-ms694 May 12 17:02:44.360: INFO: Got endpoints: latency-svc-ms694 [2.084576598s] May 12 17:02:44.625: INFO: Created: latency-svc-7649r May 12 17:02:45.006: INFO: Created: latency-svc-7rktz May 12 17:02:45.006: INFO: Got endpoints: latency-svc-7649r [2.592175299s] May 12 17:02:45.486: INFO: Got endpoints: latency-svc-7rktz [3.030438612s] May 12 17:02:45.491: INFO: Created: latency-svc-h65hb May 12 17:02:45.830: INFO: Got endpoints: latency-svc-h65hb [3.232015654s] May 12 17:02:46.080: INFO: Created: latency-svc-9kjp2 May 12 17:02:46.084: INFO: Got endpoints: latency-svc-9kjp2 [3.294440644s] May 12 17:02:46.496: INFO: Created: latency-svc-snxn8 May 12 17:02:46.876: INFO: Got endpoints: latency-svc-snxn8 [4.047044007s] May 12 17:02:46.887: INFO: Created: latency-svc-dd8nt May 12 17:02:46.913: INFO: Got endpoints: latency-svc-dd8nt [4.010902776s] May 12 17:02:47.030: INFO: Created: latency-svc-cph6w May 12 17:02:47.064: INFO: Got endpoints: latency-svc-cph6w [4.114474982s] May 12 17:02:47.126: INFO: Created: latency-svc-p59sl May 12 17:02:47.192: INFO: Got endpoints: latency-svc-p59sl [4.146848297s] May 12 17:02:47.234: INFO: Created: latency-svc-fdxk5 May 12 17:02:47.284: INFO: Got endpoints: latency-svc-fdxk5 [4.091221989s] May 12 17:02:47.349: INFO: Created: latency-svc-fzjm7 May 12 17:02:47.357: INFO: Got endpoints: latency-svc-fzjm7 [4.113089624s] May 12 17:02:47.390: INFO: Created: latency-svc-jpw9z May 12 17:02:47.435: INFO: Got endpoints: latency-svc-jpw9z [4.062753541s] May 12 17:02:47.492: INFO: Created: 
latency-svc-z6wdl May 12 17:02:47.522: INFO: Got endpoints: latency-svc-z6wdl [4.020431476s] May 12 17:02:47.559: INFO: Created: latency-svc-wm4ch May 12 17:02:47.574: INFO: Got endpoints: latency-svc-wm4ch [3.865271118s] May 12 17:02:47.630: INFO: Created: latency-svc-l9qj6 May 12 17:02:47.632: INFO: Got endpoints: latency-svc-l9qj6 [3.853635131s] May 12 17:02:47.661: INFO: Created: latency-svc-c8fwf May 12 17:02:47.670: INFO: Got endpoints: latency-svc-c8fwf [3.310047671s] May 12 17:02:47.691: INFO: Created: latency-svc-drb2z May 12 17:02:47.706: INFO: Got endpoints: latency-svc-drb2z [2.700011263s] May 12 17:02:47.726: INFO: Created: latency-svc-tqkx8 May 12 17:02:47.768: INFO: Got endpoints: latency-svc-tqkx8 [2.281097817s] May 12 17:02:47.780: INFO: Created: latency-svc-hxzd6 May 12 17:02:47.796: INFO: Got endpoints: latency-svc-hxzd6 [1.966330295s] May 12 17:02:47.816: INFO: Created: latency-svc-2glc2 May 12 17:02:47.864: INFO: Got endpoints: latency-svc-2glc2 [1.779908919s] May 12 17:02:47.960: INFO: Created: latency-svc-h68rg May 12 17:02:47.962: INFO: Got endpoints: latency-svc-h68rg [1.0861073s] May 12 17:02:48.008: INFO: Created: latency-svc-rvbl5 May 12 17:02:48.037: INFO: Got endpoints: latency-svc-rvbl5 [1.124193986s] May 12 17:02:48.408: INFO: Created: latency-svc-crm62 May 12 17:02:48.411: INFO: Got endpoints: latency-svc-crm62 [1.346657523s] May 12 17:02:48.578: INFO: Created: latency-svc-6d4hz May 12 17:02:48.588: INFO: Got endpoints: latency-svc-6d4hz [1.396093203s] May 12 17:02:48.608: INFO: Created: latency-svc-5cw4z May 12 17:02:48.612: INFO: Got endpoints: latency-svc-5cw4z [1.328000734s] May 12 17:02:48.638: INFO: Created: latency-svc-wp2bl May 12 17:02:48.655: INFO: Got endpoints: latency-svc-wp2bl [1.297695422s] May 12 17:02:48.707: INFO: Created: latency-svc-zps7l May 12 17:02:48.712: INFO: Got endpoints: latency-svc-zps7l [1.276778596s] May 12 17:02:48.992: INFO: Created: latency-svc-zk9dd May 12 17:02:49.021: INFO: Got endpoints: latency-svc-zk9dd [1.498647199s] May 12 17:02:49.148: INFO: Created: latency-svc-rb8bx May 12 17:02:49.185: INFO: Got endpoints: latency-svc-rb8bx [1.610974851s] May 12 17:02:49.266: INFO: Created: latency-svc-s5qw4 May 12 17:02:49.286: INFO: Got endpoints: latency-svc-s5qw4 [1.653982433s] May 12 17:02:49.379: INFO: Created: latency-svc-m5swt May 12 17:02:49.381: INFO: Got endpoints: latency-svc-m5swt [1.710857736s] May 12 17:02:49.413: INFO: Created: latency-svc-sgdm7 May 12 17:02:49.459: INFO: Got endpoints: latency-svc-sgdm7 [1.752902636s] May 12 17:02:49.684: INFO: Created: latency-svc-2pstz May 12 17:02:49.833: INFO: Got endpoints: latency-svc-2pstz [2.065421085s] May 12 17:02:49.845: INFO: Created: latency-svc-ktrkk May 12 17:02:49.867: INFO: Got endpoints: latency-svc-ktrkk [2.070718135s] May 12 17:02:49.964: INFO: Created: latency-svc-2p2h6 May 12 17:02:49.969: INFO: Got endpoints: latency-svc-2p2h6 [2.105241692s] May 12 17:02:50.050: INFO: Created: latency-svc-9hqtk May 12 17:02:50.277: INFO: Got endpoints: latency-svc-9hqtk [2.3146511s] May 12 17:02:50.289: INFO: Created: latency-svc-9vq2v May 12 17:02:50.486: INFO: Got endpoints: latency-svc-9vq2v [2.44876146s] May 12 17:02:50.536: INFO: Created: latency-svc-qdlpp May 12 17:02:50.762: INFO: Got endpoints: latency-svc-qdlpp [2.350925347s] May 12 17:02:50.807: INFO: Created: latency-svc-9lp89 May 12 17:02:50.851: INFO: Got endpoints: latency-svc-9lp89 [2.262641985s] May 12 17:02:50.977: INFO: Created: latency-svc-7ltf6 May 12 17:02:50.979: INFO: Got endpoints: 
latency-svc-7ltf6 [2.366525906s] May 12 17:02:51.156: INFO: Created: latency-svc-mkbpr May 12 17:02:51.182: INFO: Got endpoints: latency-svc-mkbpr [2.526976299s] May 12 17:02:51.295: INFO: Created: latency-svc-csd8m May 12 17:02:51.366: INFO: Got endpoints: latency-svc-csd8m [2.653937398s] May 12 17:02:51.455: INFO: Created: latency-svc-gd4p8 May 12 17:02:51.500: INFO: Got endpoints: latency-svc-gd4p8 [2.479032826s] May 12 17:02:51.684: INFO: Created: latency-svc-8zhlc May 12 17:02:51.730: INFO: Got endpoints: latency-svc-8zhlc [2.545372759s] May 12 17:02:51.752: INFO: Created: latency-svc-ljtwr May 12 17:02:51.764: INFO: Got endpoints: latency-svc-ljtwr [2.477174573s] May 12 17:02:51.899: INFO: Created: latency-svc-xz4pf May 12 17:02:51.901: INFO: Got endpoints: latency-svc-xz4pf [2.519987851s] May 12 17:02:51.954: INFO: Created: latency-svc-7ckhr May 12 17:02:52.114: INFO: Got endpoints: latency-svc-7ckhr [2.65524915s] May 12 17:02:52.135: INFO: Created: latency-svc-cqf4f May 12 17:02:52.435: INFO: Got endpoints: latency-svc-cqf4f [2.602418439s] May 12 17:02:52.958: INFO: Created: latency-svc-mhn68 May 12 17:02:53.267: INFO: Got endpoints: latency-svc-mhn68 [3.399541509s] May 12 17:02:53.288: INFO: Created: latency-svc-g6g6d May 12 17:02:53.799: INFO: Got endpoints: latency-svc-g6g6d [3.829532103s] May 12 17:02:54.159: INFO: Created: latency-svc-qg5c7 May 12 17:02:54.166: INFO: Got endpoints: latency-svc-qg5c7 [3.889153247s] May 12 17:02:54.656: INFO: Created: latency-svc-f5zqz May 12 17:02:54.707: INFO: Got endpoints: latency-svc-f5zqz [4.221533971s] May 12 17:02:54.909: INFO: Created: latency-svc-xldfc May 12 17:02:54.940: INFO: Got endpoints: latency-svc-xldfc [4.178462877s] May 12 17:02:55.079: INFO: Created: latency-svc-8nbjt May 12 17:02:55.086: INFO: Got endpoints: latency-svc-8nbjt [4.235359375s] May 12 17:02:55.270: INFO: Created: latency-svc-l58dq May 12 17:02:55.330: INFO: Got endpoints: latency-svc-l58dq [4.351176646s] May 12 17:02:55.843: INFO: Created: latency-svc-x7wtb May 12 17:02:55.857: INFO: Got endpoints: latency-svc-x7wtb [4.675588291s] May 12 17:02:55.916: INFO: Created: latency-svc-d2dl4 May 12 17:02:55.935: INFO: Got endpoints: latency-svc-d2dl4 [4.56961379s] May 12 17:02:55.983: INFO: Created: latency-svc-c9ktq May 12 17:02:56.019: INFO: Got endpoints: latency-svc-c9ktq [4.518728785s] May 12 17:02:56.048: INFO: Created: latency-svc-jhc82 May 12 17:02:56.079: INFO: Got endpoints: latency-svc-jhc82 [4.348639966s] May 12 17:02:56.151: INFO: Created: latency-svc-5mgvc May 12 17:02:56.186: INFO: Got endpoints: latency-svc-5mgvc [4.422626339s] May 12 17:02:56.379: INFO: Created: latency-svc-c6z5p May 12 17:02:56.422: INFO: Got endpoints: latency-svc-c6z5p [4.520830023s] May 12 17:02:56.786: INFO: Created: latency-svc-2n8sg May 12 17:02:56.823: INFO: Got endpoints: latency-svc-2n8sg [4.708514094s] May 12 17:02:56.870: INFO: Created: latency-svc-t9fpk May 12 17:02:57.019: INFO: Got endpoints: latency-svc-t9fpk [4.583212654s] May 12 17:02:57.044: INFO: Created: latency-svc-55wrv May 12 17:02:57.057: INFO: Got endpoints: latency-svc-55wrv [3.790230246s] May 12 17:02:57.086: INFO: Created: latency-svc-lltdg May 12 17:02:57.099: INFO: Got endpoints: latency-svc-lltdg [3.300094549s] May 12 17:02:57.204: INFO: Created: latency-svc-2s6m9 May 12 17:02:57.231: INFO: Got endpoints: latency-svc-2s6m9 [3.065365034s] May 12 17:02:57.471: INFO: Created: latency-svc-bjdmg May 12 17:02:57.537: INFO: Got endpoints: latency-svc-bjdmg [2.830198255s] May 12 17:02:57.647: INFO: Created: 
latency-svc-dcftc May 12 17:02:57.652: INFO: Got endpoints: latency-svc-dcftc [2.711980266s] May 12 17:02:57.725: INFO: Created: latency-svc-b5z94 May 12 17:02:57.742: INFO: Got endpoints: latency-svc-b5z94 [2.655291193s] May 12 17:02:57.783: INFO: Created: latency-svc-mrk7z May 12 17:02:57.808: INFO: Got endpoints: latency-svc-mrk7z [2.477559088s] May 12 17:02:57.855: INFO: Created: latency-svc-vhq4v May 12 17:02:58.055: INFO: Got endpoints: latency-svc-vhq4v [2.197682405s] May 12 17:02:58.057: INFO: Created: latency-svc-pd6fc May 12 17:02:58.126: INFO: Got endpoints: latency-svc-pd6fc [2.190533664s] May 12 17:02:58.295: INFO: Created: latency-svc-pk7c4 May 12 17:02:58.300: INFO: Got endpoints: latency-svc-pk7c4 [2.281677592s] May 12 17:02:58.331: INFO: Created: latency-svc-xk82p May 12 17:02:58.348: INFO: Got endpoints: latency-svc-xk82p [2.269494443s] May 12 17:02:58.372: INFO: Created: latency-svc-vblx2 May 12 17:02:58.385: INFO: Got endpoints: latency-svc-vblx2 [2.198384081s] May 12 17:02:58.438: INFO: Created: latency-svc-2w5sr May 12 17:02:58.441: INFO: Got endpoints: latency-svc-2w5sr [2.018369299s] May 12 17:02:58.468: INFO: Created: latency-svc-lr8cr May 12 17:02:58.481: INFO: Got endpoints: latency-svc-lr8cr [1.658200693s] May 12 17:02:58.504: INFO: Created: latency-svc-hqd8r May 12 17:02:58.517: INFO: Got endpoints: latency-svc-hqd8r [1.498247057s] May 12 17:02:58.575: INFO: Created: latency-svc-nxj77 May 12 17:02:58.607: INFO: Created: latency-svc-wvz2q May 12 17:02:58.607: INFO: Got endpoints: latency-svc-nxj77 [1.550148334s] May 12 17:02:58.654: INFO: Got endpoints: latency-svc-wvz2q [1.554470149s] May 12 17:02:58.810: INFO: Created: latency-svc-7gkv8 May 12 17:02:58.812: INFO: Got endpoints: latency-svc-7gkv8 [1.580667989s] May 12 17:02:59.068: INFO: Created: latency-svc-s76r4 May 12 17:02:59.078: INFO: Got endpoints: latency-svc-s76r4 [1.540089767s] May 12 17:02:59.119: INFO: Created: latency-svc-xvsp7 May 12 17:02:59.142: INFO: Got endpoints: latency-svc-xvsp7 [1.490080311s] May 12 17:02:59.243: INFO: Created: latency-svc-6kgd9 May 12 17:02:59.245: INFO: Got endpoints: latency-svc-6kgd9 [1.50347056s] May 12 17:02:59.427: INFO: Created: latency-svc-nfsvc May 12 17:02:59.443: INFO: Got endpoints: latency-svc-nfsvc [1.635487589s] May 12 17:02:59.803: INFO: Created: latency-svc-vffh6 May 12 17:02:59.826: INFO: Got endpoints: latency-svc-vffh6 [1.771350387s] May 12 17:02:59.948: INFO: Created: latency-svc-fhtr2 May 12 17:02:59.964: INFO: Got endpoints: latency-svc-fhtr2 [1.83765542s] May 12 17:03:00.139: INFO: Created: latency-svc-sh7lr May 12 17:03:00.357: INFO: Got endpoints: latency-svc-sh7lr [2.056537366s] May 12 17:03:00.438: INFO: Created: latency-svc-h6rzv May 12 17:03:00.571: INFO: Got endpoints: latency-svc-h6rzv [2.222578527s] May 12 17:03:00.571: INFO: Created: latency-svc-7998f May 12 17:03:00.626: INFO: Got endpoints: latency-svc-7998f [2.241650995s] May 12 17:03:00.676: INFO: Created: latency-svc-rwcw7 May 12 17:03:00.684: INFO: Got endpoints: latency-svc-rwcw7 [2.243860488s] May 12 17:03:00.709: INFO: Created: latency-svc-nxjxx May 12 17:03:00.726: INFO: Got endpoints: latency-svc-nxjxx [2.244928473s] May 12 17:03:00.756: INFO: Created: latency-svc-j6vss May 12 17:03:00.851: INFO: Got endpoints: latency-svc-j6vss [2.333983315s] May 12 17:03:00.855: INFO: Created: latency-svc-n65k4 May 12 17:03:00.859: INFO: Got endpoints: latency-svc-n65k4 [2.251251723s] May 12 17:03:00.883: INFO: Created: latency-svc-l7xcl May 12 17:03:00.900: INFO: Got endpoints: 
latency-svc-l7xcl [2.246512314s] May 12 17:03:00.925: INFO: Created: latency-svc-crdwd May 12 17:03:00.943: INFO: Got endpoints: latency-svc-crdwd [2.130878977s] May 12 17:03:01.019: INFO: Created: latency-svc-lwfk2 May 12 17:03:01.046: INFO: Got endpoints: latency-svc-lwfk2 [1.968263824s] May 12 17:03:01.193: INFO: Created: latency-svc-v2cw4 May 12 17:03:01.207: INFO: Got endpoints: latency-svc-v2cw4 [2.064578557s] May 12 17:03:01.292: INFO: Created: latency-svc-krpxf May 12 17:03:01.510: INFO: Got endpoints: latency-svc-krpxf [2.26477917s] May 12 17:03:01.538: INFO: Created: latency-svc-hs9rf May 12 17:03:01.575: INFO: Got endpoints: latency-svc-hs9rf [2.131588786s] May 12 17:03:01.755: INFO: Created: latency-svc-97q4v May 12 17:03:01.797: INFO: Got endpoints: latency-svc-97q4v [1.970443253s] May 12 17:03:01.946: INFO: Created: latency-svc-ps5vt May 12 17:03:01.963: INFO: Got endpoints: latency-svc-ps5vt [1.999279806s] May 12 17:03:02.162: INFO: Created: latency-svc-wrc4k May 12 17:03:02.165: INFO: Got endpoints: latency-svc-wrc4k [1.808249195s] May 12 17:03:02.240: INFO: Created: latency-svc-mwtkj May 12 17:03:02.245: INFO: Got endpoints: latency-svc-mwtkj [1.674339718s] May 12 17:03:02.309: INFO: Created: latency-svc-5clm2 May 12 17:03:02.360: INFO: Got endpoints: latency-svc-5clm2 [1.733423769s] May 12 17:03:02.360: INFO: Created: latency-svc-b7zr5 May 12 17:03:02.383: INFO: Got endpoints: latency-svc-b7zr5 [1.698877451s] May 12 17:03:02.450: INFO: Created: latency-svc-874kp May 12 17:03:02.474: INFO: Got endpoints: latency-svc-874kp [1.747980892s] May 12 17:03:02.511: INFO: Created: latency-svc-fzp9b May 12 17:03:02.534: INFO: Got endpoints: latency-svc-fzp9b [1.68268603s] May 12 17:03:02.671: INFO: Created: latency-svc-pbkn4 May 12 17:03:02.674: INFO: Got endpoints: latency-svc-pbkn4 [1.815805917s] May 12 17:03:03.091: INFO: Created: latency-svc-mhgr6 May 12 17:03:03.140: INFO: Got endpoints: latency-svc-mhgr6 [2.23913817s] May 12 17:03:03.337: INFO: Created: latency-svc-tmmgh May 12 17:03:03.392: INFO: Got endpoints: latency-svc-tmmgh [2.44858572s] May 12 17:03:03.862: INFO: Created: latency-svc-lt6r6 May 12 17:03:03.865: INFO: Got endpoints: latency-svc-lt6r6 [2.818969752s] May 12 17:03:04.029: INFO: Created: latency-svc-w2nvt May 12 17:03:04.075: INFO: Got endpoints: latency-svc-w2nvt [2.867904666s] May 12 17:03:04.224: INFO: Created: latency-svc-m5ph9 May 12 17:03:04.227: INFO: Got endpoints: latency-svc-m5ph9 [2.717210561s] May 12 17:03:04.366: INFO: Created: latency-svc-2rb6h May 12 17:03:04.393: INFO: Got endpoints: latency-svc-2rb6h [2.818030958s] May 12 17:03:04.510: INFO: Created: latency-svc-vrksq May 12 17:03:04.547: INFO: Got endpoints: latency-svc-vrksq [2.750310875s] May 12 17:03:04.609: INFO: Created: latency-svc-r6tfq May 12 17:03:04.690: INFO: Got endpoints: latency-svc-r6tfq [2.726450844s] May 12 17:03:04.746: INFO: Created: latency-svc-ld57h May 12 17:03:04.765: INFO: Got endpoints: latency-svc-ld57h [2.599478935s] May 12 17:03:04.787: INFO: Created: latency-svc-kpv9s May 12 17:03:04.851: INFO: Got endpoints: latency-svc-kpv9s [2.606042066s] May 12 17:03:04.872: INFO: Created: latency-svc-br64v May 12 17:03:04.903: INFO: Got endpoints: latency-svc-br64v [2.542707362s] May 12 17:03:04.932: INFO: Created: latency-svc-gf4pd May 12 17:03:04.948: INFO: Got endpoints: latency-svc-gf4pd [2.564442089s] May 12 17:03:04.997: INFO: Created: latency-svc-v44ct May 12 17:03:05.034: INFO: Created: latency-svc-q8n7w May 12 17:03:05.034: INFO: Got endpoints: latency-svc-v44ct 
[2.559539128s] May 12 17:03:05.042: INFO: Got endpoints: latency-svc-q8n7w [2.508096734s] May 12 17:03:05.070: INFO: Created: latency-svc-79zf6 May 12 17:03:05.084: INFO: Got endpoints: latency-svc-79zf6 [2.409917627s] May 12 17:03:05.169: INFO: Created: latency-svc-whp9h May 12 17:03:05.216: INFO: Got endpoints: latency-svc-whp9h [2.07616123s] May 12 17:03:05.217: INFO: Created: latency-svc-vvf4k May 12 17:03:05.250: INFO: Got endpoints: latency-svc-vvf4k [1.857935135s] May 12 17:03:05.331: INFO: Created: latency-svc-p2l2q May 12 17:03:05.343: INFO: Got endpoints: latency-svc-p2l2q [1.47823486s] May 12 17:03:05.370: INFO: Created: latency-svc-8dxjr May 12 17:03:05.379: INFO: Got endpoints: latency-svc-8dxjr [1.304021918s] May 12 17:03:05.412: INFO: Created: latency-svc-b848p May 12 17:03:05.498: INFO: Got endpoints: latency-svc-b848p [1.270767008s] May 12 17:03:05.502: INFO: Created: latency-svc-nd8t5 May 12 17:03:05.572: INFO: Got endpoints: latency-svc-nd8t5 [1.17852262s] May 12 17:03:05.690: INFO: Created: latency-svc-d2lrf May 12 17:03:05.716: INFO: Got endpoints: latency-svc-d2lrf [1.168545401s] May 12 17:03:05.760: INFO: Created: latency-svc-vnzdq May 12 17:03:05.882: INFO: Got endpoints: latency-svc-vnzdq [1.192200772s] May 12 17:03:05.888: INFO: Created: latency-svc-48d2g May 12 17:03:05.892: INFO: Got endpoints: latency-svc-48d2g [1.126895608s] May 12 17:03:05.978: INFO: Created: latency-svc-x8b5g May 12 17:03:06.056: INFO: Got endpoints: latency-svc-x8b5g [1.204721726s] May 12 17:03:06.224: INFO: Created: latency-svc-wtcm8 May 12 17:03:06.480: INFO: Got endpoints: latency-svc-wtcm8 [1.576798633s] May 12 17:03:06.747: INFO: Created: latency-svc-7kpf4 May 12 17:03:07.163: INFO: Got endpoints: latency-svc-7kpf4 [2.215474107s] May 12 17:03:07.564: INFO: Created: latency-svc-nt2m2 May 12 17:03:07.610: INFO: Got endpoints: latency-svc-nt2m2 [2.576217099s] May 12 17:03:09.012: INFO: Created: latency-svc-r8c4x May 12 17:03:09.222: INFO: Got endpoints: latency-svc-r8c4x [4.180460202s] May 12 17:03:09.523: INFO: Created: latency-svc-vshbg May 12 17:03:09.582: INFO: Got endpoints: latency-svc-vshbg [4.497520585s] May 12 17:03:09.894: INFO: Created: latency-svc-8djds May 12 17:03:09.924: INFO: Got endpoints: latency-svc-8djds [4.708357229s] May 12 17:03:10.322: INFO: Created: latency-svc-gwcl8 May 12 17:03:10.804: INFO: Got endpoints: latency-svc-gwcl8 [5.55402154s] May 12 17:03:10.807: INFO: Created: latency-svc-lkwr5 May 12 17:03:10.900: INFO: Got endpoints: latency-svc-lkwr5 [5.556816973s] May 12 17:03:11.107: INFO: Created: latency-svc-8rqxq May 12 17:03:11.116: INFO: Got endpoints: latency-svc-8rqxq [5.736499679s] May 12 17:03:11.362: INFO: Created: latency-svc-l46d9 May 12 17:03:11.385: INFO: Got endpoints: latency-svc-l46d9 [5.887098862s] May 12 17:03:11.556: INFO: Created: latency-svc-c5vr9 May 12 17:03:11.649: INFO: Got endpoints: latency-svc-c5vr9 [6.07739396s] May 12 17:03:11.650: INFO: Created: latency-svc-s8k6v May 12 17:03:11.776: INFO: Got endpoints: latency-svc-s8k6v [6.059698038s] May 12 17:03:12.156: INFO: Created: latency-svc-tdzqn May 12 17:03:12.505: INFO: Got endpoints: latency-svc-tdzqn [6.622736896s] May 12 17:03:12.785: INFO: Created: latency-svc-gpnt7 May 12 17:03:13.067: INFO: Got endpoints: latency-svc-gpnt7 [7.17551127s] May 12 17:03:13.098: INFO: Created: latency-svc-ggmvh May 12 17:03:13.156: INFO: Got endpoints: latency-svc-ggmvh [7.100098728s] May 12 17:03:13.228: INFO: Created: latency-svc-9hhdp May 12 17:03:13.250: INFO: Got endpoints: latency-svc-9hhdp 
[6.769958235s] May 12 17:03:13.420: INFO: Created: latency-svc-82s5k May 12 17:03:13.464: INFO: Got endpoints: latency-svc-82s5k [6.300685192s] May 12 17:03:13.582: INFO: Created: latency-svc-sdtm4 May 12 17:03:13.651: INFO: Got endpoints: latency-svc-sdtm4 [6.041020954s] May 12 17:03:13.651: INFO: Latencies: [228.508328ms 377.614129ms 605.906665ms 660.708423ms 876.034428ms 1.0861073s 1.101613638s 1.124193986s 1.126895608s 1.168545401s 1.17852262s 1.192200772s 1.193927791s 1.204721726s 1.270767008s 1.276778596s 1.297695422s 1.304021918s 1.328000734s 1.346657523s 1.352427927s 1.396093203s 1.47823486s 1.490080311s 1.498247057s 1.498647199s 1.50347056s 1.5099408s 1.540089767s 1.550148334s 1.554470149s 1.576798633s 1.580667989s 1.601224036s 1.610974851s 1.615258705s 1.628689514s 1.629960714s 1.63449043s 1.635487589s 1.653982433s 1.658200693s 1.65941195s 1.674339718s 1.68268603s 1.69307568s 1.698877451s 1.710857736s 1.723296093s 1.733423769s 1.737469953s 1.747980892s 1.752902636s 1.766371471s 1.767935382s 1.771350387s 1.775645289s 1.779908919s 1.808249195s 1.815805917s 1.830752027s 1.83765542s 1.857935135s 1.862956478s 1.865632483s 1.898033907s 1.912610798s 1.952004385s 1.952794364s 1.96369187s 1.966330295s 1.968263824s 1.970443253s 1.978421197s 1.981143323s 1.991738269s 1.999279806s 2.018369299s 2.040456689s 2.056537366s 2.064578557s 2.065421085s 2.070718135s 2.07616123s 2.084576598s 2.105241692s 2.130878977s 2.131588786s 2.190533664s 2.197682405s 2.198384081s 2.215474107s 2.222578527s 2.23913817s 2.241650995s 2.243860488s 2.244928473s 2.246512314s 2.251251723s 2.262641985s 2.26477917s 2.269494443s 2.281097817s 2.281677592s 2.3146511s 2.333983315s 2.350925347s 2.366525906s 2.409917627s 2.44858572s 2.44876146s 2.477174573s 2.477559088s 2.479032826s 2.508096734s 2.519987851s 2.526976299s 2.542707362s 2.545372759s 2.559539128s 2.564442089s 2.576217099s 2.57910473s 2.592175299s 2.599478935s 2.602418439s 2.606042066s 2.614357456s 2.653937398s 2.65524915s 2.655291193s 2.700011263s 2.711980266s 2.717210561s 2.726450844s 2.742760603s 2.750310875s 2.818030958s 2.818969752s 2.830198255s 2.832777848s 2.838170056s 2.867543254s 2.867904666s 2.988212203s 3.030438612s 3.065365034s 3.078125952s 3.105307641s 3.120906592s 3.169677446s 3.232015654s 3.238973904s 3.294440644s 3.300094549s 3.310047671s 3.333462221s 3.361310741s 3.399541509s 3.44077949s 3.790230246s 3.829532103s 3.853635131s 3.865271118s 3.889153247s 4.010902776s 4.020431476s 4.047044007s 4.062753541s 4.091221989s 4.113089624s 4.114474982s 4.146848297s 4.178462877s 4.180460202s 4.221533971s 4.235359375s 4.348639966s 4.351176646s 4.422626339s 4.497520585s 4.518728785s 4.520830023s 4.56961379s 4.583212654s 4.675588291s 4.708357229s 4.708514094s 5.55402154s 5.556816973s 5.736499679s 5.887098862s 6.041020954s 6.059698038s 6.07739396s 6.300685192s 6.622736896s 6.769958235s 7.100098728s 7.17551127s] May 12 17:03:13.651: INFO: 50 %ile: 2.26477917s May 12 17:03:13.651: INFO: 90 %ile: 4.497520585s May 12 17:03:13.651: INFO: 99 %ile: 7.100098728s May 12 17:03:13.651: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:03:13.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-663" for this suite. 
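Note: the 50/90/99 %ile figures above are derived from the 200 endpoint-latency samples listed in the preceding log line. A minimal, self-contained Go sketch of a nearest-rank percentile over a few stand-in samples taken from that list; the rounding rule here is an assumption and may differ from the e2e framework's exact computation:

package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile picks the p-th percentile (0 < p <= 100) from sorted samples
// using a simple nearest-rank rule.
func percentile(sorted []time.Duration, p int) time.Duration {
	if len(sorted) == 0 {
		return 0
	}
	idx := len(sorted) * p / 100
	if idx >= len(sorted) {
		idx = len(sorted) - 1
	}
	return sorted[idx]
}

func main() {
	// Stand-in samples; the run above collected 200 of these.
	samples := []time.Duration{
		228508328 * time.Nanosecond,  // 228.508328ms
		2264779170 * time.Nanosecond, // 2.26477917s
		4497520585 * time.Nanosecond, // 4.497520585s
		7100098728 * time.Nanosecond, // 7.100098728s
	}
	sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
	for _, p := range []int{50, 90, 99} {
		fmt.Printf("%d %%ile: %v\n", p, percentile(samples, p))
	}
}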
• [SLOW TEST:43.958 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":134,"skipped":2221,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:03:13.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-2e31d22d-57d2-4181-a6b1-d1c90c601826 STEP: Creating a pod to test consume configMaps May 12 17:03:14.132: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-17ab3d72-5063-4152-b9d1-36794ae43169" in namespace "projected-6823" to be "success or failure" May 12 17:03:14.172: INFO: Pod "pod-projected-configmaps-17ab3d72-5063-4152-b9d1-36794ae43169": Phase="Pending", Reason="", readiness=false. Elapsed: 40.149277ms May 12 17:03:16.307: INFO: Pod "pod-projected-configmaps-17ab3d72-5063-4152-b9d1-36794ae43169": Phase="Pending", Reason="", readiness=false. Elapsed: 2.174409058s May 12 17:03:18.310: INFO: Pod "pod-projected-configmaps-17ab3d72-5063-4152-b9d1-36794ae43169": Phase="Pending", Reason="", readiness=false. Elapsed: 4.177588662s May 12 17:03:20.451: INFO: Pod "pod-projected-configmaps-17ab3d72-5063-4152-b9d1-36794ae43169": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.318451996s STEP: Saw pod success May 12 17:03:20.451: INFO: Pod "pod-projected-configmaps-17ab3d72-5063-4152-b9d1-36794ae43169" satisfied condition "success or failure" May 12 17:03:20.509: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-17ab3d72-5063-4152-b9d1-36794ae43169 container projected-configmap-volume-test: STEP: delete the pod May 12 17:03:20.796: INFO: Waiting for pod pod-projected-configmaps-17ab3d72-5063-4152-b9d1-36794ae43169 to disappear May 12 17:03:20.868: INFO: Pod pod-projected-configmaps-17ab3d72-5063-4152-b9d1-36794ae43169 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:03:20.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6823" for this suite. 
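Note: the "volume with mappings" variant above mounts the configmap through a projected volume whose Items list remaps a key onto a chosen path. A minimal Go sketch of such a pod spec; the names, key, path, and image below are illustrative placeholders, not the test's exact values (the test generates random suffixes and uses its own test image):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-map"},
								// The "mapping": expose key "data-1" at path "path/to/data-2".
								Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "busybox", // placeholder image
				Command: []string{"/bin/sh", "-c", "cat /etc/projected-configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume",
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	fmt.Println("would create pod:", pod.Name)
}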
• [SLOW TEST:6.940 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":135,"skipped":2247,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:03:20.932: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 12 17:03:22.530: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 12 17:03:24.610: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724899802, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724899802, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724899803, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724899802, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 17:03:26.632: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724899802, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724899802, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724899803, loc:(*time.Location)(0x78ee0c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724899802, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 17:03:28.780: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724899802, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724899802, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724899803, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724899802, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 12 17:03:31.783: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:03:42.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1099" for this suite. STEP: Destroying namespace "webhook-1099-markers" for this suite. 
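Note: the "Registering the webhook via the AdmissionRegistration API" step amounts to creating a ValidatingWebhookConfiguration that matches pod and configmap CREATE/UPDATE operations and points at the e2e-test-webhook service deployed above. A rough Go sketch of such an object, with illustrative names and policies rather than the test's exact settings:

package main

import (
	"fmt"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	failurePolicy := admissionregistrationv1.Fail
	sideEffects := admissionregistrationv1.SideEffectClassNone
	cfg := &admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "deny-pod-and-configmap-example"}, // illustrative name
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name: "deny-unwanted-creations.example.com", // illustrative name
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{
					admissionregistrationv1.Create,
					admissionregistrationv1.Update,
				},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"pods", "configmaps"},
				},
			}},
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				// Service name and namespace as seen in the run above.
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-1099",
					Name:      "e2e-test-webhook",
				},
				CABundle: nil, // the test injects the CA from its "Setting up server cert" step
			},
			FailurePolicy:           &failurePolicy,
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
	fmt.Println("would register webhook configuration:", cfg.Name)
}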
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:22.256 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":136,"skipped":2254,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:03:43.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:03:59.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8884" for this suite. • [SLOW TEST:16.798 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":137,"skipped":2264,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:03:59.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:04:02.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7486" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":138,"skipped":2280,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:04:02.310: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:04:03.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-6830" for this suite. 
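Note: the 406 expectation in the Table transformation test comes from server-side printing. A client asks the apiserver for a Table rendering via the Accept header, and a backend that cannot attach table metadata must answer 406 Not Acceptable. A minimal client-go sketch of issuing such a request; against ordinary pods this returns 200, since the core apiserver does implement Table, while the test points the equivalent request at a backend that does not:

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	var status int
	// Ask for the server-side Table rendering of a resource list.
	client.CoreV1().RESTClient().Get().
		Resource("pods").
		Namespace("default").
		SetHeader("Accept", "application/json;as=Table;v=v1;g=meta.k8s.io").
		Do(context.TODO()).
		StatusCode(&status)
	fmt.Println("HTTP status:", status) // 200 here; 406 from a backend without Table support
}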
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":139,"skipped":2351,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:04:03.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-c7bcj in namespace proxy-9445 I0512 17:04:04.160165 7 runners.go:189] Created replication controller with name: proxy-service-c7bcj, namespace: proxy-9445, replica count: 1 I0512 17:04:05.210505 7 runners.go:189] proxy-service-c7bcj Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 17:04:06.210750 7 runners.go:189] proxy-service-c7bcj Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 17:04:07.210960 7 runners.go:189] proxy-service-c7bcj Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 17:04:08.211195 7 runners.go:189] proxy-service-c7bcj Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 17:04:09.211397 7 runners.go:189] proxy-service-c7bcj Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 17:04:10.211605 7 runners.go:189] proxy-service-c7bcj Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 17:04:11.211734 7 runners.go:189] proxy-service-c7bcj Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 17:04:12.211877 7 runners.go:189] proxy-service-c7bcj Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0512 17:04:13.212114 7 runners.go:189] proxy-service-c7bcj Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0512 17:04:14.212414 7 runners.go:189] proxy-service-c7bcj Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0512 17:04:15.212594 7 runners.go:189] proxy-service-c7bcj Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0512 17:04:16.212811 7 runners.go:189] proxy-service-c7bcj Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0512 17:04:17.213068 7 runners.go:189] 
proxy-service-c7bcj Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0512 17:04:18.213409 7 runners.go:189] proxy-service-c7bcj Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0512 17:04:19.213578 7 runners.go:189] proxy-service-c7bcj Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0512 17:04:20.213773 7 runners.go:189] proxy-service-c7bcj Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0512 17:04:21.213919 7 runners.go:189] proxy-service-c7bcj Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 12 17:04:21.242: INFO: setup took 17.376019186s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 12 17:04:21.254: INFO: (0) /api/v1/namespaces/proxy-9445/pods/proxy-service-c7bcj-6c9fw:1080/proxy/: test<... (200; 12.318312ms) May 12 17:04:21.254: INFO: (0) /api/v1/namespaces/proxy-9445/pods/http:proxy-service-c7bcj-6c9fw:1080/proxy/: ... (200; 11.305823ms) May 12 17:04:21.255: INFO: (0) /api/v1/namespaces/proxy-9445/pods/proxy-service-c7bcj-6c9fw/proxy/: test (200; 11.513151ms) May 12 17:04:21.258: INFO: (0) /api/v1/namespaces/proxy-9445/pods/proxy-service-c7bcj-6c9fw:162/proxy/: bar (200; 15.317264ms) May 12 17:04:21.258: INFO: (0) /api/v1/namespaces/proxy-9445/pods/https:proxy-service-c7bcj-6c9fw:443/proxy/: test (200; 15.158387ms) May 12 17:04:21.286: INFO: (1) /api/v1/namespaces/proxy-9445/pods/http:proxy-service-c7bcj-6c9fw:1080/proxy/: ... (200; 15.247248ms) May 12 17:04:21.286: INFO: (1) /api/v1/namespaces/proxy-9445/pods/https:proxy-service-c7bcj-6c9fw:443/proxy/: test<... (200; 15.181209ms) May 12 17:04:21.286: INFO: (1) /api/v1/namespaces/proxy-9445/services/proxy-service-c7bcj:portname1/proxy/: foo (200; 15.678718ms) May 12 17:04:21.287: INFO: (1) /api/v1/namespaces/proxy-9445/services/http:proxy-service-c7bcj:portname2/proxy/: bar (200; 16.025945ms) May 12 17:04:21.287: INFO: (1) /api/v1/namespaces/proxy-9445/services/http:proxy-service-c7bcj:portname1/proxy/: foo (200; 16.109968ms) May 12 17:04:21.287: INFO: (1) /api/v1/namespaces/proxy-9445/services/https:proxy-service-c7bcj:tlsportname2/proxy/: tls qux (200; 16.277022ms) May 12 17:04:21.288: INFO: (1) /api/v1/namespaces/proxy-9445/services/proxy-service-c7bcj:portname2/proxy/: bar (200; 17.514245ms) May 12 17:04:21.288: INFO: (1) /api/v1/namespaces/proxy-9445/services/https:proxy-service-c7bcj:tlsportname1/proxy/: tls baz (200; 17.678491ms) May 12 17:04:21.298: INFO: (2) /api/v1/namespaces/proxy-9445/pods/proxy-service-c7bcj-6c9fw:160/proxy/: foo (200; 9.447884ms) May 12 17:04:21.298: INFO: (2) /api/v1/namespaces/proxy-9445/pods/proxy-service-c7bcj-6c9fw:1080/proxy/: test<... 
(200; 9.202019ms) May 12 17:04:21.298: INFO: (2) /api/v1/namespaces/proxy-9445/pods/http:proxy-service-c7bcj-6c9fw:162/proxy/: bar (200; 9.57612ms) May 12 17:04:21.298: INFO: (2) /api/v1/namespaces/proxy-9445/pods/https:proxy-service-c7bcj-6c9fw:460/proxy/: tls baz (200; 9.222842ms) May 12 17:04:21.298: INFO: (2) /api/v1/namespaces/proxy-9445/pods/http:proxy-service-c7bcj-6c9fw:160/proxy/: foo (200; 9.245077ms) May 12 17:04:21.298: INFO: (2) /api/v1/namespaces/proxy-9445/pods/https:proxy-service-c7bcj-6c9fw:443/proxy/: test (200; 9.487268ms) May 12 17:04:21.298: INFO: (2) /api/v1/namespaces/proxy-9445/pods/proxy-service-c7bcj-6c9fw:162/proxy/: bar (200; 9.542894ms) May 12 17:04:21.298: INFO: (2) /api/v1/namespaces/proxy-9445/pods/http:proxy-service-c7bcj-6c9fw:1080/proxy/: ... (200; 9.474625ms) May 12 17:04:21.320: INFO: (2) /api/v1/namespaces/proxy-9445/services/https:proxy-service-c7bcj:tlsportname1/proxy/: tls baz (200; 31.196573ms) May 12 17:04:21.320: INFO: (2) /api/v1/namespaces/proxy-9445/services/proxy-service-c7bcj:portname2/proxy/: bar (200; 31.299869ms) May 12 17:04:21.320: INFO: (2) /api/v1/namespaces/proxy-9445/services/http:proxy-service-c7bcj:portname2/proxy/: bar (200; 31.388438ms) May 12 17:04:21.320: INFO: (2) /api/v1/namespaces/proxy-9445/services/http:proxy-service-c7bcj:portname1/proxy/: foo (200; 31.054506ms) May 12 17:04:21.320: INFO: (2) /api/v1/namespaces/proxy-9445/services/https:proxy-service-c7bcj:tlsportname2/proxy/: tls qux (200; 31.438767ms) May 12 17:04:21.321: INFO: (2) /api/v1/namespaces/proxy-9445/services/proxy-service-c7bcj:portname1/proxy/: foo (200; 32.595762ms) May 12 17:04:21.328: INFO: (3) /api/v1/namespaces/proxy-9445/pods/http:proxy-service-c7bcj-6c9fw:1080/proxy/: ... (200; 6.949085ms) May 12 17:04:21.328: INFO: (3) /api/v1/namespaces/proxy-9445/pods/proxy-service-c7bcj-6c9fw:162/proxy/: bar (200; 6.988409ms) May 12 17:04:21.328: INFO: (3) /api/v1/namespaces/proxy-9445/pods/http:proxy-service-c7bcj-6c9fw:160/proxy/: foo (200; 6.863333ms) May 12 17:04:21.328: INFO: (3) /api/v1/namespaces/proxy-9445/pods/http:proxy-service-c7bcj-6c9fw:162/proxy/: bar (200; 7.108359ms) May 12 17:04:21.328: INFO: (3) /api/v1/namespaces/proxy-9445/pods/proxy-service-c7bcj-6c9fw:1080/proxy/: test<... (200; 7.061586ms) May 12 17:04:21.329: INFO: (3) /api/v1/namespaces/proxy-9445/pods/https:proxy-service-c7bcj-6c9fw:443/proxy/: test (200; 9.605169ms) May 12 17:04:21.331: INFO: (3) /api/v1/namespaces/proxy-9445/services/http:proxy-service-c7bcj:portname1/proxy/: foo (200; 9.688861ms) May 12 17:04:21.331: INFO: (3) /api/v1/namespaces/proxy-9445/services/http:proxy-service-c7bcj:portname2/proxy/: bar (200; 9.961656ms) May 12 17:04:21.340: INFO: (4) /api/v1/namespaces/proxy-9445/pods/https:proxy-service-c7bcj-6c9fw:460/proxy/: tls baz (200; 8.980976ms) May 12 17:04:21.340: INFO: (4) /api/v1/namespaces/proxy-9445/pods/proxy-service-c7bcj-6c9fw:162/proxy/: bar (200; 9.066046ms) May 12 17:04:21.340: INFO: (4) /api/v1/namespaces/proxy-9445/pods/http:proxy-service-c7bcj-6c9fw:162/proxy/: bar (200; 9.119542ms) May 12 17:04:21.340: INFO: (4) /api/v1/namespaces/proxy-9445/pods/proxy-service-c7bcj-6c9fw:160/proxy/: foo (200; 9.043682ms) May 12 17:04:21.340: INFO: (4) /api/v1/namespaces/proxy-9445/pods/https:proxy-service-c7bcj-6c9fw:443/proxy/: test (200; 9.186858ms) May 12 17:04:21.341: INFO: (4) /api/v1/namespaces/proxy-9445/pods/http:proxy-service-c7bcj-6c9fw:1080/proxy/: ... 
(200; 9.329827ms) May 12 17:04:21.341: INFO: (4) /api/v1/namespaces/proxy-9445/services/http:proxy-service-c7bcj:portname1/proxy/: foo (200; 9.495108ms) May 12 17:04:21.341: INFO: (4) /api/v1/namespaces/proxy-9445/services/https:proxy-service-c7bcj:tlsportname1/proxy/: tls baz (200; 9.725622ms) May 12 17:04:21.341: INFO: (4) /api/v1/namespaces/proxy-9445/pods/proxy-service-c7bcj-6c9fw:1080/proxy/: test<... (200; 9.922302ms) May 12 17:04:21.341: INFO: (4) /api/v1/namespaces/proxy-9445/services/proxy-service-c7bcj:portname2/proxy/: bar (200; 9.907018ms) May 12 17:04:21.341: INFO: (4) /api/v1/namespaces/proxy-9445/services/http:proxy-service-c7bcj:portname2/proxy/: bar (200; 10.000396ms) May 12 17:04:21.341: INFO: (4) /api/v1/namespaces/proxy-9445/services/proxy-service-c7bcj:portname1/proxy/: foo (200; 9.924464ms) May 12 17:04:21.341: INFO: (4) /api/v1/namespaces/proxy-9445/services/https:proxy-service-c7bcj:tlsportname2/proxy/: tls qux (200; 10.026962ms) May 12 17:04:21.404: INFO: (5) /api/v1/namespaces/proxy-9445/pods/proxy-service-c7bcj-6c9fw:162/proxy/: bar (200; 62.924114ms) May 12 17:04:21.406: INFO: (5) /api/v1/namespaces/proxy-9445/pods/proxy-service-c7bcj-6c9fw/proxy/: test (200; 64.440367ms) May 12 17:04:21.407: INFO: (5) /api/v1/namespaces/proxy-9445/services/https:proxy-service-c7bcj:tlsportname1/proxy/: tls baz (200; 65.203305ms) May 12 17:04:21.407: INFO: (5) /api/v1/namespaces/proxy-9445/services/http:proxy-service-c7bcj:portname1/proxy/: foo (200; 65.49124ms) May 12 17:04:21.407: INFO: (5) /api/v1/namespaces/proxy-9445/services/https:proxy-service-c7bcj:tlsportname2/proxy/: tls qux (200; 65.513908ms) May 12 17:04:21.407: INFO: (5) /api/v1/namespaces/proxy-9445/pods/http:proxy-service-c7bcj-6c9fw:160/proxy/: foo (200; 65.632275ms) May 12 17:04:21.407: INFO: (5) /api/v1/namespaces/proxy-9445/pods/https:proxy-service-c7bcj-6c9fw:443/proxy/: test<... (200; 66.149398ms) May 12 17:04:21.408: INFO: (5) /api/v1/namespaces/proxy-9445/pods/http:proxy-service-c7bcj-6c9fw:1080/proxy/: ... (200; 66.059146ms) May 12 17:04:21.408: INFO: (5) /api/v1/namespaces/proxy-9445/services/proxy-service-c7bcj:portname2/proxy/: bar (200; 66.129251ms) May 12 17:04:21.408: INFO: (5) /api/v1/namespaces/proxy-9445/services/proxy-service-c7bcj:portname1/proxy/: foo (200; 66.215268ms) May 12 17:04:21.408: INFO: (5) /api/v1/namespaces/proxy-9445/pods/https:proxy-service-c7bcj-6c9fw:462/proxy/: tls qux (200; 66.289716ms) May 12 17:04:21.410: INFO: (5) /api/v1/namespaces/proxy-9445/pods/proxy-service-c7bcj-6c9fw:160/proxy/: foo (200; 68.570973ms) May 12 17:04:21.411: INFO: (5) /api/v1/namespaces/proxy-9445/pods/http:proxy-service-c7bcj-6c9fw:162/proxy/: bar (200; 69.473138ms) May 12 17:04:21.418: INFO: (6) /api/v1/namespaces/proxy-9445/pods/http:proxy-service-c7bcj-6c9fw:162/proxy/: bar (200; 6.564872ms) May 12 17:04:21.418: INFO: (6) /api/v1/namespaces/proxy-9445/pods/proxy-service-c7bcj-6c9fw:1080/proxy/: test<... 
(200; 6.682525ms) May 12 17:04:21.418: INFO: (6) /api/v1/namespaces/proxy-9445/pods/https:proxy-service-c7bcj-6c9fw:460/proxy/: tls baz (200; 6.919915ms) May 12 17:04:21.419: INFO: (6) /api/v1/namespaces/proxy-9445/pods/proxy-service-c7bcj-6c9fw/proxy/: test (200; 7.616005ms) May 12 17:04:21.419: INFO: (6) /api/v1/namespaces/proxy-9445/services/http:proxy-service-c7bcj:portname1/proxy/: foo (200; 7.673551ms) May 12 17:04:21.419: INFO: (6) /api/v1/namespaces/proxy-9445/pods/http:proxy-service-c7bcj-6c9fw:160/proxy/: foo (200; 7.660104ms) May 12 17:04:21.419: INFO: (6) /api/v1/namespaces/proxy-9445/pods/https:proxy-service-c7bcj-6c9fw:443/proxy/: ... (200; 7.88753ms) May 12 17:04:21.419: INFO: (6) /api/v1/namespaces/proxy-9445/services/proxy-service-c7bcj:portname1/proxy/: foo (200; 8.35263ms) May 12 17:04:21.420: INFO: (6) /api/v1/namespaces/proxy-9445/services/proxy-service-c7bcj:portname2/proxy/: bar (200; 8.605747ms) May 12 17:04:21.420: INFO: (6) /api/v1/namespaces/proxy-9445/services/http:proxy-service-c7bcj:portname2/proxy/: bar (200; 8.670236ms) May 12 17:04:21.420: INFO: (6) /api/v1/namespaces/proxy-9445/services/https:proxy-service-c7bcj:tlsportname1/proxy/: tls baz (200; 8.818413ms) May 12 17:04:21.420: INFO: (6) /api/v1/namespaces/proxy-9445/services/https:proxy-service-c7bcj:tlsportname2/proxy/: tls qux (200; 8.917976ms) May 12 17:04:21.448: INFO: (7) /api/v1/namespaces/proxy-9445/pods/proxy-service-c7bcj-6c9fw:162/proxy/: bar (200; 28.126679ms) May 12 17:04:21.448: INFO: (7) /api/v1/namespaces/proxy-9445/pods/https:proxy-service-c7bcj-6c9fw:443/proxy/: ... (200; 29.196274ms) May 12 17:04:21.449: INFO: (7) /api/v1/namespaces/proxy-9445/pods/proxy-service-c7bcj-6c9fw:160/proxy/: foo (200; 29.242111ms) May 12 17:04:21.449: INFO: (7) /api/v1/namespaces/proxy-9445/pods/http:proxy-service-c7bcj-6c9fw:160/proxy/: foo (200; 29.374982ms) May 12 17:04:21.449: INFO: (7) /api/v1/namespaces/proxy-9445/pods/proxy-service-c7bcj-6c9fw/proxy/: test (200; 29.345866ms) May 12 17:04:21.449: INFO: (7) /api/v1/namespaces/proxy-9445/pods/proxy-service-c7bcj-6c9fw:1080/proxy/: test<... (200; 29.398253ms) May 12 17:04:21.450: INFO: (7) /api/v1/namespaces/proxy-9445/services/https:proxy-service-c7bcj:tlsportname2/proxy/: tls qux (200; 30.215806ms) May 12 17:04:21.450: INFO: (7) /api/v1/namespaces/proxy-9445/pods/https:proxy-service-c7bcj-6c9fw:462/proxy/: tls qux (200; 30.273992ms) May 12 17:04:21.451: INFO: (7) /api/v1/namespaces/proxy-9445/services/proxy-service-c7bcj:portname2/proxy/: bar (200; 30.792702ms) May 12 17:04:21.452: INFO: (7) /api/v1/namespaces/proxy-9445/services/https:proxy-service-c7bcj:tlsportname1/proxy/: tls baz (200; 31.708022ms) May 12 17:04:21.452: INFO: (7) /api/v1/namespaces/proxy-9445/services/proxy-service-c7bcj:portname1/proxy/: foo (200; 31.682973ms) May 12 17:04:21.452: INFO: (7) /api/v1/namespaces/proxy-9445/services/http:proxy-service-c7bcj:portname1/proxy/: foo (200; 31.740234ms) May 12 17:04:21.452: INFO: (7) /api/v1/namespaces/proxy-9445/services/http:proxy-service-c7bcj:portname2/proxy/: bar (200; 31.734434ms) May 12 17:04:21.499: INFO: (8) /api/v1/namespaces/proxy-9445/pods/https:proxy-service-c7bcj-6c9fw:443/proxy/: ... (200; 46.988937ms) May 12 17:04:21.500: INFO: (8) /api/v1/namespaces/proxy-9445/pods/proxy-service-c7bcj-6c9fw:160/proxy/: foo (200; 47.441977ms) May 12 17:04:21.500: INFO: (8) /api/v1/namespaces/proxy-9445/pods/proxy-service-c7bcj-6c9fw:1080/proxy/: test<... 
(200; 48.009571ms) May 12 17:04:21.500: INFO: (8) /api/v1/namespaces/proxy-9445/pods/https:proxy-service-c7bcj-6c9fw:462/proxy/: tls qux (200; 47.88986ms) May 12 17:04:21.501: INFO: (8) /api/v1/namespaces/proxy-9445/pods/proxy-service-c7bcj-6c9fw/proxy/: test (200; 48.376707ms) May 12 17:04:21.503: INFO: (8) /api/v1/namespaces/proxy-9445/services/https:proxy-service-c7bcj:tlsportname1/proxy/: tls baz (200; 50.650997ms) May 12 17:04:21.503: INFO: (8) /api/v1/namespaces/proxy-9445/services/http:proxy-service-c7bcj:portname2/proxy/: bar (200; 50.447568ms) May 12 17:04:21.503: INFO: (8) /api/v1/namespaces/proxy-9445/services/https:proxy-service-c7bcj:tlsportname2/proxy/: tls qux (200; 50.125036ms) May 12 17:04:21.503: INFO: (8) /api/v1/namespaces/proxy-9445/services/proxy-service-c7bcj:portname1/proxy/: foo (200; 50.85293ms) May 12 17:04:21.503: INFO: (8) /api/v1/namespaces/proxy-9445/services/http:proxy-service-c7bcj:portname1/proxy/: foo (200; 50.300822ms) May 12 17:04:21.503: INFO: (8) /api/v1/namespaces/proxy-9445/services/proxy-service-c7bcj:portname2/proxy/: bar (200; 51.040885ms) May 12 17:04:21.566: INFO: (9) /api/v1/namespaces/proxy-9445/pods/proxy-service-c7bcj-6c9fw:1080/proxy/: test<... (200; 63.044044ms) May 12 17:04:21.567: INFO: (9) /api/v1/namespaces/proxy-9445/pods/proxy-service-c7bcj-6c9fw:162/proxy/: bar (200; 63.164236ms) May 12 17:04:21.567: INFO: (9) /api/v1/namespaces/proxy-9445/pods/https:proxy-service-c7bcj-6c9fw:460/proxy/: tls baz (200; 63.178349ms) May 12 17:04:21.567: INFO: (9) /api/v1/namespaces/proxy-9445/pods/proxy-service-c7bcj-6c9fw:160/proxy/: foo (200; 63.104375ms) May 12 17:04:21.567: INFO: (9) /api/v1/namespaces/proxy-9445/pods/https:proxy-service-c7bcj-6c9fw:443/proxy/: test (200; 63.225468ms) May 12 17:04:21.567: INFO: (9) /api/v1/namespaces/proxy-9445/pods/https:proxy-service-c7bcj-6c9fw:462/proxy/: tls qux (200; 63.303615ms) May 12 17:04:21.567: INFO: (9) /api/v1/namespaces/proxy-9445/pods/http:proxy-service-c7bcj-6c9fw:160/proxy/: foo (200; 63.394731ms) May 12 17:04:21.567: INFO: (9) /api/v1/namespaces/proxy-9445/pods/http:proxy-service-c7bcj-6c9fw:162/proxy/: bar (200; 63.381119ms) May 12 17:04:21.567: INFO: (9) /api/v1/namespaces/proxy-9445/pods/http:proxy-service-c7bcj-6c9fw:1080/proxy/: ... 
(200; 63.392834ms) May 12 17:04:21.568: INFO: (9) /api/v1/namespaces/proxy-9445/services/http:proxy-service-c7bcj:portname2/proxy/: bar (200; 64.916031ms) May 12 17:04:21.569: INFO: (9) /api/v1/namespaces/proxy-9445/services/https:proxy-service-c7bcj:tlsportname2/proxy/: tls qux (200; 65.545046ms) May 12 17:04:21.570: INFO: (9) /api/v1/namespaces/proxy-9445/services/http:proxy-service-c7bcj:portname1/proxy/: foo (200; 66.809443ms) May 12 17:04:21.570: INFO: (9) /api/v1/namespaces/proxy-9445/services/https:proxy-service-c7bcj:tlsportname1/proxy/: tls baz (200; 66.772613ms) May 12 17:04:21.570: INFO: (9) /api/v1/namespaces/proxy-9445/services/proxy-service-c7bcj:portname2/proxy/: bar (200; 66.940698ms) May 12 17:04:21.570: INFO: (9) /api/v1/namespaces/proxy-9445/services/proxy-service-c7bcj:portname1/proxy/: foo (200; 66.921133ms) May 12 17:04:21.597: INFO: (10) /api/v1/namespaces/proxy-9445/pods/proxy-service-c7bcj-6c9fw/proxy/: test (200; 26.78479ms) May 12 17:04:21.618: INFO: (10) /api/v1/namespaces/proxy-9445/pods/https:proxy-service-c7bcj-6c9fw:460/proxy/: tls baz (200; 46.919777ms) May 12 17:04:21.618: INFO: (10) /api/v1/namespaces/proxy-9445/pods/http:proxy-service-c7bcj-6c9fw:160/proxy/: foo (200; 47.048913ms) May 12 17:04:21.618: INFO: (10) /api/v1/namespaces/proxy-9445/pods/http:proxy-service-c7bcj-6c9fw:162/proxy/: bar (200; 46.97615ms) May 12 17:04:21.618: INFO: (10) /api/v1/namespaces/proxy-9445/pods/http:proxy-service-c7bcj-6c9fw:1080/proxy/: ... (200; 47.164282ms) May 12 17:04:21.618: INFO: (10) /api/v1/namespaces/proxy-9445/pods/https:proxy-service-c7bcj-6c9fw:443/proxy/: test<... (200; 47.416906ms) May 12 17:04:21.618: INFO: (10) /api/v1/namespaces/proxy-9445/pods/proxy-service-c7bcj-6c9fw:162/proxy/: bar (200; 47.461217ms) May 12 17:04:21.618: INFO: (10) /api/v1/namespaces/proxy-9445/services/http:proxy-service-c7bcj:portname2/proxy/: bar (200; 47.625024ms) May 12 17:04:21.619: INFO: (10) /api/v1/namespaces/proxy-9445/services/https:proxy-service-c7bcj:tlsportname1/proxy/: tls baz (200; 47.836824ms) May 12 17:04:21.619: INFO: (10) /api/v1/namespaces/proxy-9445/services/proxy-service-c7bcj:portname2/proxy/: bar (200; 47.976547ms) May 12 17:04:21.619: INFO: (10) /api/v1/namespaces/proxy-9445/services/https:proxy-service-c7bcj:tlsportname2/proxy/: tls qux (200; 48.086513ms) May 12 17:04:21.619: INFO: (10) /api/v1/namespaces/proxy-9445/pods/https:proxy-service-c7bcj-6c9fw:462/proxy/: tls qux (200; 48.328964ms) May 12 17:04:21.620: INFO: (10) /api/v1/namespaces/proxy-9445/services/http:proxy-service-c7bcj:portname1/proxy/: foo (200; 49.389841ms) May 12 17:04:21.621: INFO: (10) /api/v1/namespaces/proxy-9445/services/proxy-service-c7bcj:portname1/proxy/: foo (200; 49.949773ms) May 12 17:04:21.756: INFO: (11) /api/v1/namespaces/proxy-9445/pods/https:proxy-service-c7bcj-6c9fw:462/proxy/: tls qux (200; 134.292804ms) May 12 17:04:21.756: INFO: (11) /api/v1/namespaces/proxy-9445/pods/http:proxy-service-c7bcj-6c9fw:160/proxy/: foo (200; 134.504688ms) May 12 17:04:21.756: INFO: (11) /api/v1/namespaces/proxy-9445/pods/http:proxy-service-c7bcj-6c9fw:162/proxy/: bar (200; 134.770794ms) May 12 17:04:21.756: INFO: (11) /api/v1/namespaces/proxy-9445/pods/proxy-service-c7bcj-6c9fw/proxy/: test (200; 133.950103ms) May 12 17:04:21.756: INFO: (11) /api/v1/namespaces/proxy-9445/pods/http:proxy-service-c7bcj-6c9fw:1080/proxy/: ... 
(200; 135.019055ms) May 12 17:04:21.756: INFO: (11) /api/v1/namespaces/proxy-9445/services/proxy-service-c7bcj:portname1/proxy/: foo (200; 134.167234ms) May 12 17:04:21.756: INFO: (11) /api/v1/namespaces/proxy-9445/services/https:proxy-service-c7bcj:tlsportname1/proxy/: tls baz (200; 134.040778ms) May 12 17:04:21.756: INFO: (11) /api/v1/namespaces/proxy-9445/pods/proxy-service-c7bcj-6c9fw:160/proxy/: foo (200; 134.303446ms) May 12 17:04:21.756: INFO: (11) /api/v1/namespaces/proxy-9445/pods/proxy-service-c7bcj-6c9fw:1080/proxy/: test<... (200; 134.490578ms) May 12 17:04:21.756: INFO: (11) /api/v1/namespaces/proxy-9445/services/http:proxy-service-c7bcj:portname1/proxy/: foo (200; 134.680053ms) May 12 17:04:21.756: INFO: (11) /api/v1/namespaces/proxy-9445/services/https:proxy-service-c7bcj:tlsportname2/proxy/: tls qux (200; 134.618732ms) May 12 17:04:21.756: INFO: (11) /api/v1/namespaces/proxy-9445/pods/https:proxy-service-c7bcj-6c9fw:460/proxy/: tls baz (200; 134.747325ms) May 12 17:04:21.756: INFO: (11) /api/v1/namespaces/proxy-9445/pods/https:proxy-service-c7bcj-6c9fw:443/proxy/: ... (200; 15.86731ms) May 12 17:04:21.772: INFO: (12) /api/v1/namespaces/proxy-9445/pods/proxy-service-c7bcj-6c9fw/proxy/: test (200; 15.686324ms) May 12 17:04:21.772: INFO: (12) /api/v1/namespaces/proxy-9445/pods/http:proxy-service-c7bcj-6c9fw:160/proxy/: foo (200; 16.089307ms) May 12 17:04:21.772: INFO: (12) /api/v1/namespaces/proxy-9445/pods/proxy-service-c7bcj-6c9fw:160/proxy/: foo (200; 15.98782ms) May 12 17:04:21.772: INFO: (12) /api/v1/namespaces/proxy-9445/pods/proxy-service-c7bcj-6c9fw:1080/proxy/: test<... (200; 16.053421ms) May 12 17:04:21.772: INFO: (12) /api/v1/namespaces/proxy-9445/pods/proxy-service-c7bcj-6c9fw:162/proxy/: bar (200; 15.972847ms) May 12 17:04:21.772: INFO: (12) /api/v1/namespaces/proxy-9445/pods/https:proxy-service-c7bcj-6c9fw:462/proxy/: tls qux (200; 16.076596ms) May 12 17:04:21.772: INFO: (12) /api/v1/namespaces/proxy-9445/pods/http:proxy-service-c7bcj-6c9fw:162/proxy/: bar (200; 15.947634ms) May 12 17:04:21.772: INFO: (12) /api/v1/namespaces/proxy-9445/pods/https:proxy-service-c7bcj-6c9fw:460/proxy/: tls baz (200; 15.930696ms) May 12 17:04:21.773: INFO: (12) /api/v1/namespaces/proxy-9445/services/proxy-service-c7bcj:portname2/proxy/: bar (200; 17.176429ms) May 12 17:04:21.773: INFO: (12) /api/v1/namespaces/proxy-9445/services/http:proxy-service-c7bcj:portname1/proxy/: foo (200; 17.011957ms) May 12 17:04:21.773: INFO: (12) /api/v1/namespaces/proxy-9445/pods/https:proxy-service-c7bcj-6c9fw:443/proxy/: test<... (200; 64.418307ms) May 12 17:04:21.839: INFO: (13) /api/v1/namespaces/proxy-9445/pods/https:proxy-service-c7bcj-6c9fw:443/proxy/: ... 
(200; 65.048314ms) May 12 17:04:21.839: INFO: (13) /api/v1/namespaces/proxy-9445/pods/https:proxy-service-c7bcj-6c9fw:460/proxy/: tls baz (200; 65.09601ms) May 12 17:04:21.839: INFO: (13) /api/v1/namespaces/proxy-9445/pods/https:proxy-service-c7bcj-6c9fw:462/proxy/: tls qux (200; 65.198728ms) May 12 17:04:21.840: INFO: (13) /api/v1/namespaces/proxy-9445/pods/proxy-service-c7bcj-6c9fw/proxy/: test (200; 65.573367ms) May 12 17:04:21.930: INFO: (13) /api/v1/namespaces/proxy-9445/services/http:proxy-service-c7bcj:portname1/proxy/: foo (200; 155.79016ms) May 12 17:04:21.930: INFO: (13) /api/v1/namespaces/proxy-9445/services/http:proxy-service-c7bcj:portname2/proxy/: bar (200; 155.930121ms) May 12 17:04:21.930: INFO: (13) /api/v1/namespaces/proxy-9445/services/proxy-service-c7bcj:portname1/proxy/: foo (200; 155.915636ms) May 12 17:04:21.930: INFO: (13) /api/v1/namespaces/proxy-9445/services/https:proxy-service-c7bcj:tlsportname2/proxy/: tls qux (200; 156.009149ms) May 12 17:04:21.930: INFO: (13) /api/v1/namespaces/proxy-9445/services/https:proxy-service-c7bcj:tlsportname1/proxy/: tls baz (200; 156.156634ms) May 12 17:04:21.930: INFO: (13) /api/v1/namespaces/proxy-9445/services/proxy-service-c7bcj:portname2/proxy/: bar (200; 156.192407ms) May 12 17:04:21.940: INFO: (14) /api/v1/namespaces/proxy-9445/pods/proxy-service-c7bcj-6c9fw:1080/proxy/: test<... (200; 8.845584ms) May 12 17:04:21.940: INFO: (14) /api/v1/namespaces/proxy-9445/pods/proxy-service-c7bcj-6c9fw:162/proxy/: bar (200; 9.247398ms) May 12 17:04:21.940: INFO: (14) /api/v1/namespaces/proxy-9445/pods/http:proxy-service-c7bcj-6c9fw:162/proxy/: bar (200; 9.172208ms) May 12 17:04:21.940: INFO: (14) /api/v1/namespaces/proxy-9445/pods/http:proxy-service-c7bcj-6c9fw:160/proxy/: foo (200; 9.252958ms) May 12 17:04:21.940: INFO: (14) /api/v1/namespaces/proxy-9445/pods/proxy-service-c7bcj-6c9fw/proxy/: test (200; 9.555176ms) May 12 17:04:21.940: INFO: (14) /api/v1/namespaces/proxy-9445/pods/proxy-service-c7bcj-6c9fw:160/proxy/: foo (200; 9.639151ms) May 12 17:04:21.940: INFO: (14) /api/v1/namespaces/proxy-9445/pods/https:proxy-service-c7bcj-6c9fw:460/proxy/: tls baz (200; 9.821984ms) May 12 17:04:21.940: INFO: (14) /api/v1/namespaces/proxy-9445/pods/https:proxy-service-c7bcj-6c9fw:443/proxy/: ... 
(200; 9.909915ms) May 12 17:04:21.941: INFO: (14) /api/v1/namespaces/proxy-9445/services/proxy-service-c7bcj:portname1/proxy/: foo (200; 10.340576ms) May 12 17:04:21.941: INFO: (14) /api/v1/namespaces/proxy-9445/services/proxy-service-c7bcj:portname2/proxy/: bar (200; 10.309664ms) May 12 17:04:21.941: INFO: (14) /api/v1/namespaces/proxy-9445/services/https:proxy-service-c7bcj:tlsportname1/proxy/: tls baz (200; 10.343913ms) May 12 17:04:21.941: INFO: (14) /api/v1/namespaces/proxy-9445/services/http:proxy-service-c7bcj:portname1/proxy/: foo (200; 10.77133ms) May 12 17:04:21.942: INFO: (14) /api/v1/namespaces/proxy-9445/services/http:proxy-service-c7bcj:portname2/proxy/: bar (200; 11.247406ms) May 12 17:04:21.967: INFO: (15) /api/v1/namespaces/proxy-9445/pods/proxy-service-c7bcj-6c9fw:162/proxy/: bar (200; 25.239514ms) May 12 17:04:21.967: INFO: (15) /api/v1/namespaces/proxy-9445/pods/http:proxy-service-c7bcj-6c9fw:160/proxy/: foo (200; 25.290987ms) May 12 17:04:21.967: INFO: (15) /api/v1/namespaces/proxy-9445/pods/proxy-service-c7bcj-6c9fw:160/proxy/: foo (200; 25.360043ms) May 12 17:04:21.967: INFO: (15) /api/v1/namespaces/proxy-9445/pods/https:proxy-service-c7bcj-6c9fw:462/proxy/: tls qux (200; 25.211827ms) May 12 17:04:21.967: INFO: (15) /api/v1/namespaces/proxy-9445/pods/proxy-service-c7bcj-6c9fw:1080/proxy/: test<... (200; 25.262552ms) May 12 17:04:21.967: INFO: (15) /api/v1/namespaces/proxy-9445/pods/http:proxy-service-c7bcj-6c9fw:162/proxy/: bar (200; 25.380705ms) May 12 17:04:21.967: INFO: (15) /api/v1/namespaces/proxy-9445/pods/http:proxy-service-c7bcj-6c9fw:1080/proxy/: ... (200; 25.266266ms) May 12 17:04:21.967: INFO: (15) /api/v1/namespaces/proxy-9445/pods/https:proxy-service-c7bcj-6c9fw:443/proxy/: test (200; 25.222692ms) May 12 17:04:21.967: INFO: (15) /api/v1/namespaces/proxy-9445/pods/https:proxy-service-c7bcj-6c9fw:460/proxy/: tls baz (200; 25.545215ms) May 12 17:04:21.968: INFO: (15) /api/v1/namespaces/proxy-9445/services/http:proxy-service-c7bcj:portname1/proxy/: foo (200; 26.185462ms) May 12 17:04:21.968: INFO: (15) /api/v1/namespaces/proxy-9445/services/proxy-service-c7bcj:portname1/proxy/: foo (200; 26.22785ms) May 12 17:04:21.968: INFO: (15) /api/v1/namespaces/proxy-9445/services/https:proxy-service-c7bcj:tlsportname1/proxy/: tls baz (200; 26.219481ms) May 12 17:04:21.968: INFO: (15) /api/v1/namespaces/proxy-9445/services/proxy-service-c7bcj:portname2/proxy/: bar (200; 26.248891ms) May 12 17:04:21.968: INFO: (15) /api/v1/namespaces/proxy-9445/services/http:proxy-service-c7bcj:portname2/proxy/: bar (200; 26.236102ms) May 12 17:04:21.970: INFO: (15) /api/v1/namespaces/proxy-9445/services/https:proxy-service-c7bcj:tlsportname2/proxy/: tls qux (200; 28.052467ms) May 12 17:04:22.004: INFO: (16) /api/v1/namespaces/proxy-9445/pods/http:proxy-service-c7bcj-6c9fw:162/proxy/: bar (200; 33.669913ms) May 12 17:04:22.004: INFO: (16) /api/v1/namespaces/proxy-9445/pods/proxy-service-c7bcj-6c9fw:162/proxy/: bar (200; 33.884702ms) May 12 17:04:22.004: INFO: (16) /api/v1/namespaces/proxy-9445/pods/https:proxy-service-c7bcj-6c9fw:460/proxy/: tls baz (200; 33.92054ms) May 12 17:04:22.004: INFO: (16) /api/v1/namespaces/proxy-9445/pods/http:proxy-service-c7bcj-6c9fw:1080/proxy/: ... (200; 33.947771ms) May 12 17:04:22.004: INFO: (16) /api/v1/namespaces/proxy-9445/pods/https:proxy-service-c7bcj-6c9fw:443/proxy/: test<... 
(200; 34.109374ms) May 12 17:04:22.004: INFO: (16) /api/v1/namespaces/proxy-9445/pods/proxy-service-c7bcj-6c9fw/proxy/: test (200; 34.017185ms) May 12 17:04:22.004: INFO: (16) /api/v1/namespaces/proxy-9445/pods/proxy-service-c7bcj-6c9fw:160/proxy/: foo (200; 34.048817ms) May 12 17:04:22.004: INFO: (16) /api/v1/namespaces/proxy-9445/pods/http:proxy-service-c7bcj-6c9fw:160/proxy/: foo (200; 34.116963ms) May 12 17:04:22.007: INFO: (16) /api/v1/namespaces/proxy-9445/services/proxy-service-c7bcj:portname2/proxy/: bar (200; 36.622293ms) May 12 17:04:22.007: INFO: (16) /api/v1/namespaces/proxy-9445/services/proxy-service-c7bcj:portname1/proxy/: foo (200; 36.73939ms) May 12 17:04:22.007: INFO: (16) /api/v1/namespaces/proxy-9445/services/http:proxy-service-c7bcj:portname2/proxy/: bar (200; 36.710882ms) May 12 17:04:22.007: INFO: (16) /api/v1/namespaces/proxy-9445/services/https:proxy-service-c7bcj:tlsportname1/proxy/: tls baz (200; 36.753479ms) May 12 17:04:22.007: INFO: (16) /api/v1/namespaces/proxy-9445/services/https:proxy-service-c7bcj:tlsportname2/proxy/: tls qux (200; 36.791916ms) May 12 17:04:22.080: INFO: (16) /api/v1/namespaces/proxy-9445/services/http:proxy-service-c7bcj:portname1/proxy/: foo (200; 109.507955ms) May 12 17:04:22.083: INFO: (17) /api/v1/namespaces/proxy-9445/pods/http:proxy-service-c7bcj-6c9fw:162/proxy/: bar (200; 3.1326ms) May 12 17:04:22.084: INFO: (17) /api/v1/namespaces/proxy-9445/pods/proxy-service-c7bcj-6c9fw/proxy/: test (200; 3.889913ms) May 12 17:04:22.084: INFO: (17) /api/v1/namespaces/proxy-9445/pods/http:proxy-service-c7bcj-6c9fw:1080/proxy/: ... (200; 4.315448ms) May 12 17:04:22.084: INFO: (17) /api/v1/namespaces/proxy-9445/pods/http:proxy-service-c7bcj-6c9fw:160/proxy/: foo (200; 4.218212ms) May 12 17:04:22.084: INFO: (17) /api/v1/namespaces/proxy-9445/pods/proxy-service-c7bcj-6c9fw:1080/proxy/: test<... (200; 4.238355ms) May 12 17:04:22.084: INFO: (17) /api/v1/namespaces/proxy-9445/pods/https:proxy-service-c7bcj-6c9fw:460/proxy/: tls baz (200; 4.375227ms) May 12 17:04:22.084: INFO: (17) /api/v1/namespaces/proxy-9445/pods/https:proxy-service-c7bcj-6c9fw:462/proxy/: tls qux (200; 4.55506ms) May 12 17:04:22.084: INFO: (17) /api/v1/namespaces/proxy-9445/pods/proxy-service-c7bcj-6c9fw:162/proxy/: bar (200; 4.50572ms) May 12 17:04:22.084: INFO: (17) /api/v1/namespaces/proxy-9445/pods/https:proxy-service-c7bcj-6c9fw:443/proxy/: test (200; 41.917468ms) May 12 17:04:22.169: INFO: (18) /api/v1/namespaces/proxy-9445/pods/https:proxy-service-c7bcj-6c9fw:460/proxy/: tls baz (200; 41.931082ms) May 12 17:04:22.169: INFO: (18) /api/v1/namespaces/proxy-9445/pods/http:proxy-service-c7bcj-6c9fw:160/proxy/: foo (200; 41.902615ms) May 12 17:04:22.169: INFO: (18) /api/v1/namespaces/proxy-9445/pods/https:proxy-service-c7bcj-6c9fw:443/proxy/: ... (200; 41.944242ms) May 12 17:04:22.169: INFO: (18) /api/v1/namespaces/proxy-9445/services/http:proxy-service-c7bcj:portname1/proxy/: foo (200; 42.130073ms) May 12 17:04:22.169: INFO: (18) /api/v1/namespaces/proxy-9445/pods/https:proxy-service-c7bcj-6c9fw:462/proxy/: tls qux (200; 42.07511ms) May 12 17:04:22.169: INFO: (18) /api/v1/namespaces/proxy-9445/services/http:proxy-service-c7bcj:portname2/proxy/: bar (200; 42.153346ms) May 12 17:04:22.169: INFO: (18) /api/v1/namespaces/proxy-9445/pods/proxy-service-c7bcj-6c9fw:1080/proxy/: test<... 
(200; 42.10316ms) May 12 17:04:22.268: INFO: (18) /api/v1/namespaces/proxy-9445/services/proxy-service-c7bcj:portname1/proxy/: foo (200; 140.421281ms) May 12 17:04:22.268: INFO: (18) /api/v1/namespaces/proxy-9445/services/proxy-service-c7bcj:portname2/proxy/: bar (200; 140.454185ms) May 12 17:04:22.330: INFO: (19) /api/v1/namespaces/proxy-9445/pods/https:proxy-service-c7bcj-6c9fw:443/proxy/: test (200; 62.410664ms) May 12 17:04:22.330: INFO: (19) /api/v1/namespaces/proxy-9445/pods/proxy-service-c7bcj-6c9fw:1080/proxy/: test<... (200; 62.350804ms) May 12 17:04:22.330: INFO: (19) /api/v1/namespaces/proxy-9445/pods/http:proxy-service-c7bcj-6c9fw:162/proxy/: bar (200; 62.390489ms) May 12 17:04:22.330: INFO: (19) /api/v1/namespaces/proxy-9445/pods/http:proxy-service-c7bcj-6c9fw:160/proxy/: foo (200; 62.465049ms) May 12 17:04:22.331: INFO: (19) /api/v1/namespaces/proxy-9445/pods/http:proxy-service-c7bcj-6c9fw:1080/proxy/: ... (200; 63.148965ms) May 12 17:04:22.331: INFO: (19) /api/v1/namespaces/proxy-9445/pods/proxy-service-c7bcj-6c9fw:160/proxy/: foo (200; 63.162628ms) May 12 17:04:22.333: INFO: (19) /api/v1/namespaces/proxy-9445/services/proxy-service-c7bcj:portname1/proxy/: foo (200; 65.359144ms) May 12 17:04:22.333: INFO: (19) /api/v1/namespaces/proxy-9445/services/http:proxy-service-c7bcj:portname2/proxy/: bar (200; 65.291163ms) May 12 17:04:22.333: INFO: (19) /api/v1/namespaces/proxy-9445/services/proxy-service-c7bcj:portname2/proxy/: bar (200; 65.372385ms) May 12 17:04:22.333: INFO: (19) /api/v1/namespaces/proxy-9445/services/http:proxy-service-c7bcj:portname1/proxy/: foo (200; 65.28726ms) May 12 17:04:22.337: INFO: (19) /api/v1/namespaces/proxy-9445/pods/proxy-service-c7bcj-6c9fw:162/proxy/: bar (200; 69.063481ms) May 12 17:04:22.338: INFO: (19) /api/v1/namespaces/proxy-9445/pods/https:proxy-service-c7bcj-6c9fw:462/proxy/: tls qux (200; 69.803526ms) May 12 17:04:22.338: INFO: (19) /api/v1/namespaces/proxy-9445/services/https:proxy-service-c7bcj:tlsportname2/proxy/: tls qux (200; 69.564234ms) May 12 17:04:22.338: INFO: (19) /api/v1/namespaces/proxy-9445/pods/https:proxy-service-c7bcj-6c9fw:460/proxy/: tls baz (200; 69.734475ms) May 12 17:04:22.338: INFO: (19) /api/v1/namespaces/proxy-9445/services/https:proxy-service-c7bcj:tlsportname1/proxy/: tls baz (200; 69.693185ms) STEP: deleting ReplicationController proxy-service-c7bcj in namespace proxy-9445, will wait for the garbage collector to delete the pods May 12 17:04:22.687: INFO: Deleting ReplicationController proxy-service-c7bcj took: 203.552104ms May 12 17:04:23.187: INFO: Terminating ReplicationController proxy-service-c7bcj pods took: 500.242041ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:04:29.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-9445" for this suite. 
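Note: each of the 320 attempts above exercises the apiserver proxy subresource for the pod and the service over different port and scheme spellings (port name, port number, http:, https:). With client-go, the same service-proxy GET that appears in the log as /api/v1/namespaces/proxy-9445/services/proxy-service-c7bcj:portname1/proxy/ can be issued as sketched below; the names are taken from the run above and only resolve while that service exists:

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// GET /api/v1/namespaces/proxy-9445/services/proxy-service-c7bcj:portname1/proxy/
	body, err := client.CoreV1().Services("proxy-9445").
		ProxyGet("", "proxy-service-c7bcj", "portname1", "/", nil).
		DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("service proxy replied: %s\n", body)
	// Pods("proxy-9445").ProxyGet(...) works the same way for the pod-level attempts.
}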
• [SLOW TEST:26.841 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":278,"completed":140,"skipped":2404,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:04:30.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-a07ea0ef-f7d4-40ce-9960-0f1cb94d8755 STEP: Creating a pod to test consume secrets May 12 17:04:31.677: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3faeca7e-4173-4755-90b2-1b0b85a9d3ba" in namespace "projected-276" to be "success or failure" May 12 17:04:31.841: INFO: Pod "pod-projected-secrets-3faeca7e-4173-4755-90b2-1b0b85a9d3ba": Phase="Pending", Reason="", readiness=false. Elapsed: 164.441399ms May 12 17:04:34.038: INFO: Pod "pod-projected-secrets-3faeca7e-4173-4755-90b2-1b0b85a9d3ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.360702851s May 12 17:04:36.092: INFO: Pod "pod-projected-secrets-3faeca7e-4173-4755-90b2-1b0b85a9d3ba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.415510917s May 12 17:04:38.266: INFO: Pod "pod-projected-secrets-3faeca7e-4173-4755-90b2-1b0b85a9d3ba": Phase="Pending", Reason="", readiness=false. Elapsed: 6.589546966s May 12 17:04:40.270: INFO: Pod "pod-projected-secrets-3faeca7e-4173-4755-90b2-1b0b85a9d3ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.593068192s STEP: Saw pod success May 12 17:04:40.270: INFO: Pod "pod-projected-secrets-3faeca7e-4173-4755-90b2-1b0b85a9d3ba" satisfied condition "success or failure" May 12 17:04:40.597: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-3faeca7e-4173-4755-90b2-1b0b85a9d3ba container projected-secret-volume-test: STEP: delete the pod May 12 17:04:40.843: INFO: Waiting for pod pod-projected-secrets-3faeca7e-4173-4755-90b2-1b0b85a9d3ba to disappear May 12 17:04:40.852: INFO: Pod pod-projected-secrets-3faeca7e-4173-4755-90b2-1b0b85a9d3ba no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:04:40.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-276" for this suite. 
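Note: the projected-secret test is structurally the same as the projected-configmap case earlier; the only difference is that the VolumeProjection source is a SecretProjection. A minimal sketch of just the volume, again with placeholder names:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "projected-secret-volume", // placeholder name
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
						// No Items list: every key in the secret is projected under its own name.
					},
				}},
			},
		},
	}
	fmt.Println("projected volume:", vol.Name)
}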
• [SLOW TEST:10.611 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":141,"skipped":2419,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:04:40.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 12 17:04:47.374: INFO: &Pod{ObjectMeta:{send-events-edc27a4f-5e7d-482f-bc20-4e0842c7ea52 events-4492 /api/v1/namespaces/events-4492/pods/send-events-edc27a4f-5e7d-482f-bc20-4e0842c7ea52 1bec8322-a079-4c16-860a-33298e4aa4a3 15622492 0 2020-05-12 17:04:40 +0000 UTC map[name:foo time:991735073] map[] [] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vvd4z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vvd4z,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vvd4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:04:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:04:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:04:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:04:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.18,StartTime:2020-05-12 17:04:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-12 17:04:44 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://6d44a47965a4e892d289305b8fc677f897f0d1d56b89cbb94b74a67a99fccc45,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.18,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod May 12 17:04:49.674: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 12 17:04:51.678: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:04:51.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-4492" for this suite. • [SLOW TEST:11.037 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":142,"skipped":2445,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:04:51.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium May 12 17:04:52.499: INFO: Waiting up to 5m0s for pod "pod-fafc9c2d-f7f5-4228-b1a3-dbfe91ec423a" in namespace "emptydir-7857" to be "success or failure" May 12 17:04:52.511: INFO: Pod "pod-fafc9c2d-f7f5-4228-b1a3-dbfe91ec423a": Phase="Pending", Reason="", readiness=false. Elapsed: 11.701211ms May 12 17:04:54.578: INFO: Pod "pod-fafc9c2d-f7f5-4228-b1a3-dbfe91ec423a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078194346s May 12 17:04:56.581: INFO: Pod "pod-fafc9c2d-f7f5-4228-b1a3-dbfe91ec423a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081932675s May 12 17:04:58.931: INFO: Pod "pod-fafc9c2d-f7f5-4228-b1a3-dbfe91ec423a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.431192313s STEP: Saw pod success May 12 17:04:58.931: INFO: Pod "pod-fafc9c2d-f7f5-4228-b1a3-dbfe91ec423a" satisfied condition "success or failure" May 12 17:04:58.933: INFO: Trying to get logs from node jerma-worker pod pod-fafc9c2d-f7f5-4228-b1a3-dbfe91ec423a container test-container: STEP: delete the pod May 12 17:04:59.321: INFO: Waiting for pod pod-fafc9c2d-f7f5-4228-b1a3-dbfe91ec423a to disappear May 12 17:04:59.416: INFO: Pod pod-fafc9c2d-f7f5-4228-b1a3-dbfe91ec423a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:04:59.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7857" for this suite. • [SLOW TEST:7.636 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":143,"skipped":2447,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} S ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:04:59.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-e5e1d0c6-0d7f-4a5f-992f-043bb855b829 STEP: Creating configMap with name cm-test-opt-upd-0c99751a-73cb-40f1-bdef-525a7b4e4150 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-e5e1d0c6-0d7f-4a5f-992f-043bb855b829 STEP: Updating configmap cm-test-opt-upd-0c99751a-73cb-40f1-bdef-525a7b4e4150 STEP: Creating configMap with name cm-test-opt-create-a10db827-b1c2-48db-97a1-430035a580f4 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:06:28.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2974" for this suite. 
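What "optional updates" means here, sketched as a manifest with invented names: a projected configMap source marked optional lets the pod start even if the configMap does not exist yet, and the kubelet later materializes the keys once it is created, just as it re-syncs the delete and update of the other two configMaps:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo-optional-cm-pod
spec:
  containers:
  - name: createcm-volume-test
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: createcm-volume
      mountPath: /etc/projected-configmap-volumes/create
  volumes:
  - name: createcm-volume
    projected:
      sources:
      - configMap:
          name: demo-cm-created-later
          optional: true
EOF

Volume contents converge on the kubelet's periodic sync rather than instantly, which is why this spec waits to observe the update and runs for roughly 90 seconds.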
• [SLOW TEST:88.595 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":144,"skipped":2448,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:06:28.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 12 17:06:33.418: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:06:33.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9101" for this suite. 
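A minimal reproduction of the FallbackToLogsOnError setup, with illustrative names; /dev/termination-log is also the default terminationMessagePath:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo-termination-msg
spec:
  restartPolicy: Never
  containers:
  - name: termination-message-container
    image: busybox
    command: ["sh", "-c", "echo -n OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
EOF

kubectl get pod demo-termination-msg \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'

Because the pod succeeds and the file is non-empty, the message comes from the file ("OK", matching the Expected: &{OK} line above); the fallback to container logs only kicks in when a container fails with an empty termination-message file.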
• [SLOW TEST:5.376 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":145,"skipped":2491,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:06:33.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 12 17:06:33.730: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-35123324-0bf7-4f5f-a629-143ee1629d5e" in namespace "security-context-test-8474" to be "success or failure" May 12 17:06:33.903: INFO: Pod "busybox-readonly-false-35123324-0bf7-4f5f-a629-143ee1629d5e": Phase="Pending", Reason="", readiness=false. Elapsed: 172.876346ms May 12 17:06:35.906: INFO: Pod "busybox-readonly-false-35123324-0bf7-4f5f-a629-143ee1629d5e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.175796337s May 12 17:06:37.909: INFO: Pod "busybox-readonly-false-35123324-0bf7-4f5f-a629-143ee1629d5e": Phase="Running", Reason="", readiness=true. Elapsed: 4.178566195s May 12 17:06:39.913: INFO: Pod "busybox-readonly-false-35123324-0bf7-4f5f-a629-143ee1629d5e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.18312443s May 12 17:06:39.913: INFO: Pod "busybox-readonly-false-35123324-0bf7-4f5f-a629-143ee1629d5e" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:06:39.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8474" for this suite. 
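The same check, sketched as a standalone pod with illustrative names; with readOnlyRootFilesystem: false the write to the root filesystem succeeds and the pod completes, whereas flipping it to true would make the touch fail:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo-busybox-readonly-false
spec:
  restartPolicy: Never
  containers:
  - name: busybox-readonly-false
    image: busybox
    command: ["sh", "-c", "touch /tmp/can-write && echo rootfs is writable"]
    securityContext:
      readOnlyRootFilesystem: false
EOF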
• [SLOW TEST:6.420 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 When creating a pod with readOnlyRootFilesystem /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:164 should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":146,"skipped":2498,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:06:39.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-7460 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet May 12 17:06:40.070: INFO: Found 0 stateful pods, waiting for 3 May 12 17:06:50.075: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 12 17:06:50.075: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 12 17:06:50.075: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 12 17:07:00.073: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 12 17:07:00.073: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 12 17:07:00.073: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 12 17:07:00.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7460 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 12 17:07:00.325: INFO: stderr: "I0512 17:07:00.215644 2913 log.go:172] (0xc000b1a8f0) (0xc0008e8460) Create stream\nI0512 17:07:00.215720 2913 log.go:172] (0xc000b1a8f0) (0xc0008e8460) Stream added, broadcasting: 1\nI0512 17:07:00.219091 2913 log.go:172] (0xc000b1a8f0) Reply frame received for 1\nI0512 17:07:00.219116 2913 log.go:172] (0xc000b1a8f0) (0xc0007edb80) Create stream\nI0512 17:07:00.219124 2913 log.go:172] (0xc000b1a8f0) 
(0xc0007edb80) Stream added, broadcasting: 3\nI0512 17:07:00.220039 2913 log.go:172] (0xc000b1a8f0) Reply frame received for 3\nI0512 17:07:00.220070 2913 log.go:172] (0xc000b1a8f0) (0xc000708780) Create stream\nI0512 17:07:00.220078 2913 log.go:172] (0xc000b1a8f0) (0xc000708780) Stream added, broadcasting: 5\nI0512 17:07:00.220941 2913 log.go:172] (0xc000b1a8f0) Reply frame received for 5\nI0512 17:07:00.283516 2913 log.go:172] (0xc000b1a8f0) Data frame received for 5\nI0512 17:07:00.283548 2913 log.go:172] (0xc000708780) (5) Data frame handling\nI0512 17:07:00.283568 2913 log.go:172] (0xc000708780) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0512 17:07:00.317585 2913 log.go:172] (0xc000b1a8f0) Data frame received for 3\nI0512 17:07:00.317634 2913 log.go:172] (0xc0007edb80) (3) Data frame handling\nI0512 17:07:00.317670 2913 log.go:172] (0xc0007edb80) (3) Data frame sent\nI0512 17:07:00.318035 2913 log.go:172] (0xc000b1a8f0) Data frame received for 3\nI0512 17:07:00.318080 2913 log.go:172] (0xc0007edb80) (3) Data frame handling\nI0512 17:07:00.318121 2913 log.go:172] (0xc000b1a8f0) Data frame received for 5\nI0512 17:07:00.318140 2913 log.go:172] (0xc000708780) (5) Data frame handling\nI0512 17:07:00.320018 2913 log.go:172] (0xc000b1a8f0) Data frame received for 1\nI0512 17:07:00.320047 2913 log.go:172] (0xc0008e8460) (1) Data frame handling\nI0512 17:07:00.320067 2913 log.go:172] (0xc0008e8460) (1) Data frame sent\nI0512 17:07:00.320089 2913 log.go:172] (0xc000b1a8f0) (0xc0008e8460) Stream removed, broadcasting: 1\nI0512 17:07:00.320131 2913 log.go:172] (0xc000b1a8f0) Go away received\nI0512 17:07:00.320679 2913 log.go:172] (0xc000b1a8f0) (0xc0008e8460) Stream removed, broadcasting: 1\nI0512 17:07:00.320702 2913 log.go:172] (0xc000b1a8f0) (0xc0007edb80) Stream removed, broadcasting: 3\nI0512 17:07:00.320715 2913 log.go:172] (0xc000b1a8f0) (0xc000708780) Stream removed, broadcasting: 5\n" May 12 17:07:00.326: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 12 17:07:00.326: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 12 17:07:10.510: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 12 17:07:20.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7460 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 17:07:20.776: INFO: stderr: "I0512 17:07:20.683943 2932 log.go:172] (0xc000944bb0) (0xc000706280) Create stream\nI0512 17:07:20.684000 2932 log.go:172] (0xc000944bb0) (0xc000706280) Stream added, broadcasting: 1\nI0512 17:07:20.686573 2932 log.go:172] (0xc000944bb0) Reply frame received for 1\nI0512 17:07:20.686628 2932 log.go:172] (0xc000944bb0) (0xc00086c000) Create stream\nI0512 17:07:20.686649 2932 log.go:172] (0xc000944bb0) (0xc00086c000) Stream added, broadcasting: 3\nI0512 17:07:20.687497 2932 log.go:172] (0xc000944bb0) Reply frame received for 3\nI0512 17:07:20.687519 2932 log.go:172] (0xc000944bb0) (0xc000706320) Create stream\nI0512 17:07:20.687526 2932 log.go:172] (0xc000944bb0) (0xc000706320) Stream added, broadcasting: 5\nI0512 17:07:20.688239 2932 log.go:172] (0xc000944bb0) Reply frame received for 5\nI0512 17:07:20.772952 2932 
log.go:172] (0xc000944bb0) Data frame received for 3\nI0512 17:07:20.772973 2932 log.go:172] (0xc00086c000) (3) Data frame handling\nI0512 17:07:20.772980 2932 log.go:172] (0xc00086c000) (3) Data frame sent\nI0512 17:07:20.772985 2932 log.go:172] (0xc000944bb0) Data frame received for 3\nI0512 17:07:20.772989 2932 log.go:172] (0xc00086c000) (3) Data frame handling\nI0512 17:07:20.773018 2932 log.go:172] (0xc000944bb0) Data frame received for 5\nI0512 17:07:20.773032 2932 log.go:172] (0xc000706320) (5) Data frame handling\nI0512 17:07:20.773041 2932 log.go:172] (0xc000706320) (5) Data frame sent\nI0512 17:07:20.773052 2932 log.go:172] (0xc000944bb0) Data frame received for 5\nI0512 17:07:20.773061 2932 log.go:172] (0xc000706320) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0512 17:07:20.773695 2932 log.go:172] (0xc000944bb0) Data frame received for 1\nI0512 17:07:20.773709 2932 log.go:172] (0xc000706280) (1) Data frame handling\nI0512 17:07:20.773717 2932 log.go:172] (0xc000706280) (1) Data frame sent\nI0512 17:07:20.773779 2932 log.go:172] (0xc000944bb0) (0xc000706280) Stream removed, broadcasting: 1\nI0512 17:07:20.773811 2932 log.go:172] (0xc000944bb0) Go away received\nI0512 17:07:20.773990 2932 log.go:172] (0xc000944bb0) (0xc000706280) Stream removed, broadcasting: 1\nI0512 17:07:20.774000 2932 log.go:172] (0xc000944bb0) (0xc00086c000) Stream removed, broadcasting: 3\nI0512 17:07:20.774004 2932 log.go:172] (0xc000944bb0) (0xc000706320) Stream removed, broadcasting: 5\n" May 12 17:07:20.776: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 12 17:07:20.776: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 12 17:07:30.792: INFO: Waiting for StatefulSet statefulset-7460/ss2 to complete update May 12 17:07:30.792: INFO: Waiting for Pod statefulset-7460/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 12 17:07:30.792: INFO: Waiting for Pod statefulset-7460/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 12 17:07:40.796: INFO: Waiting for StatefulSet statefulset-7460/ss2 to complete update May 12 17:07:40.796: INFO: Waiting for Pod statefulset-7460/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Rolling back to a previous revision May 12 17:08:00.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7460 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 12 17:08:01.487: INFO: stderr: "I0512 17:08:01.366661 2955 log.go:172] (0xc0006f6790) (0xc0009b4000) Create stream\nI0512 17:08:01.366712 2955 log.go:172] (0xc0006f6790) (0xc0009b4000) Stream added, broadcasting: 1\nI0512 17:08:01.369084 2955 log.go:172] (0xc0006f6790) Reply frame received for 1\nI0512 17:08:01.369276 2955 log.go:172] (0xc0006f6790) (0xc0006ddcc0) Create stream\nI0512 17:08:01.369297 2955 log.go:172] (0xc0006f6790) (0xc0006ddcc0) Stream added, broadcasting: 3\nI0512 17:08:01.370208 2955 log.go:172] (0xc0006f6790) Reply frame received for 3\nI0512 17:08:01.370240 2955 log.go:172] (0xc0006f6790) (0xc0009b40a0) Create stream\nI0512 17:08:01.370250 2955 log.go:172] (0xc0006f6790) (0xc0009b40a0) Stream added, broadcasting: 5\nI0512 17:08:01.371090 2955 log.go:172] (0xc0006f6790) Reply frame received for 5\nI0512 17:08:01.423167 2955 log.go:172] (0xc0006f6790) Data frame received for 5\nI0512 17:08:01.423191 2955 
log.go:172] (0xc0009b40a0) (5) Data frame handling\nI0512 17:08:01.423209 2955 log.go:172] (0xc0009b40a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0512 17:08:01.481057 2955 log.go:172] (0xc0006f6790) Data frame received for 3\nI0512 17:08:01.481085 2955 log.go:172] (0xc0006ddcc0) (3) Data frame handling\nI0512 17:08:01.481094 2955 log.go:172] (0xc0006ddcc0) (3) Data frame sent\nI0512 17:08:01.481978 2955 log.go:172] (0xc0006f6790) Data frame received for 5\nI0512 17:08:01.481991 2955 log.go:172] (0xc0009b40a0) (5) Data frame handling\nI0512 17:08:01.482072 2955 log.go:172] (0xc0006f6790) Data frame received for 3\nI0512 17:08:01.482082 2955 log.go:172] (0xc0006ddcc0) (3) Data frame handling\nI0512 17:08:01.483318 2955 log.go:172] (0xc0006f6790) Data frame received for 1\nI0512 17:08:01.483330 2955 log.go:172] (0xc0009b4000) (1) Data frame handling\nI0512 17:08:01.483343 2955 log.go:172] (0xc0009b4000) (1) Data frame sent\nI0512 17:08:01.483354 2955 log.go:172] (0xc0006f6790) (0xc0009b4000) Stream removed, broadcasting: 1\nI0512 17:08:01.483370 2955 log.go:172] (0xc0006f6790) Go away received\nI0512 17:08:01.483642 2955 log.go:172] (0xc0006f6790) (0xc0009b4000) Stream removed, broadcasting: 1\nI0512 17:08:01.483658 2955 log.go:172] (0xc0006f6790) (0xc0006ddcc0) Stream removed, broadcasting: 3\nI0512 17:08:01.483667 2955 log.go:172] (0xc0006f6790) (0xc0009b40a0) Stream removed, broadcasting: 5\n" May 12 17:08:01.487: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 12 17:08:01.487: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 12 17:08:11.698: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 12 17:08:22.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7460 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 17:08:22.768: INFO: stderr: "I0512 17:08:22.672182 2975 log.go:172] (0xc0005204d0) (0xc0005e3d60) Create stream\nI0512 17:08:22.672240 2975 log.go:172] (0xc0005204d0) (0xc0005e3d60) Stream added, broadcasting: 1\nI0512 17:08:22.674444 2975 log.go:172] (0xc0005204d0) Reply frame received for 1\nI0512 17:08:22.674482 2975 log.go:172] (0xc0005204d0) (0xc000140820) Create stream\nI0512 17:08:22.674495 2975 log.go:172] (0xc0005204d0) (0xc000140820) Stream added, broadcasting: 3\nI0512 17:08:22.675300 2975 log.go:172] (0xc0005204d0) Reply frame received for 3\nI0512 17:08:22.675331 2975 log.go:172] (0xc0005204d0) (0xc0007137c0) Create stream\nI0512 17:08:22.675340 2975 log.go:172] (0xc0005204d0) (0xc0007137c0) Stream added, broadcasting: 5\nI0512 17:08:22.676010 2975 log.go:172] (0xc0005204d0) Reply frame received for 5\nI0512 17:08:22.728775 2975 log.go:172] (0xc0005204d0) Data frame received for 5\nI0512 17:08:22.728801 2975 log.go:172] (0xc0007137c0) (5) Data frame handling\nI0512 17:08:22.728817 2975 log.go:172] (0xc0007137c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0512 17:08:22.756944 2975 log.go:172] (0xc0005204d0) Data frame received for 3\nI0512 17:08:22.756976 2975 log.go:172] (0xc000140820) (3) Data frame handling\nI0512 17:08:22.756987 2975 log.go:172] (0xc000140820) (3) Data frame sent\nI0512 17:08:22.757009 2975 log.go:172] (0xc0005204d0) Data frame received for 5\nI0512 17:08:22.757018 2975 log.go:172] (0xc0007137c0) (5) Data frame handling\nI0512 
17:08:22.757546 2975 log.go:172] (0xc0005204d0) Data frame received for 3\nI0512 17:08:22.757579 2975 log.go:172] (0xc000140820) (3) Data frame handling\nI0512 17:08:22.763025 2975 log.go:172] (0xc0005204d0) Data frame received for 1\nI0512 17:08:22.763060 2975 log.go:172] (0xc0005e3d60) (1) Data frame handling\nI0512 17:08:22.763088 2975 log.go:172] (0xc0005e3d60) (1) Data frame sent\nI0512 17:08:22.763118 2975 log.go:172] (0xc0005204d0) (0xc0005e3d60) Stream removed, broadcasting: 1\nI0512 17:08:22.763230 2975 log.go:172] (0xc0005204d0) Go away received\nI0512 17:08:22.763581 2975 log.go:172] (0xc0005204d0) (0xc0005e3d60) Stream removed, broadcasting: 1\nI0512 17:08:22.763607 2975 log.go:172] (0xc0005204d0) (0xc000140820) Stream removed, broadcasting: 3\nI0512 17:08:22.763623 2975 log.go:172] (0xc0005204d0) (0xc0007137c0) Stream removed, broadcasting: 5\n" May 12 17:08:22.768: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 12 17:08:22.768: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 12 17:08:33.222: INFO: Waiting for StatefulSet statefulset-7460/ss2 to complete update May 12 17:08:33.222: INFO: Waiting for Pod statefulset-7460/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 12 17:08:33.222: INFO: Waiting for Pod statefulset-7460/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 12 17:08:43.488: INFO: Waiting for StatefulSet statefulset-7460/ss2 to complete update May 12 17:08:43.488: INFO: Waiting for Pod statefulset-7460/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 12 17:08:53.954: INFO: Waiting for StatefulSet statefulset-7460/ss2 to complete update May 12 17:08:53.954: INFO: Waiting for Pod statefulset-7460/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 May 12 17:09:03.603: INFO: Waiting for StatefulSet statefulset-7460/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 12 17:09:13.404: INFO: Deleting all statefulset in ns statefulset-7460 May 12 17:09:13.405: INFO: Scaling statefulset ss2 to 0 May 12 17:09:43.547: INFO: Waiting for statefulset status.replicas updated to 0 May 12 17:09:43.552: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:09:44.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7460" for this suite. 
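The spec drives the update and rollback through the API; the equivalent imperative flow with kubectl looks roughly like this (the container name "webserver" is an assumption, confirm it with kubectl get sts ss2 -o yaml):

kubectl -n statefulset-7460 set image statefulset/ss2 \
  webserver=docker.io/library/httpd:2.4.39-alpine
kubectl -n statefulset-7460 rollout status statefulset/ss2
kubectl -n statefulset-7460 rollout undo statefulset/ss2
kubectl -n statefulset-7460 rollout history statefulset/ss2

The revision hashes in the log (ss2-84f9d6bf57, ss2-65c7964b94) are the ControllerRevisions these rollout commands move between; as the log notes, pods are replaced in reverse ordinal order, so ss2-2 updates first and ss2-0 last.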
• [SLOW TEST:184.966 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":147,"skipped":2519,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:09:44.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs May 12 17:09:46.549: INFO: Waiting up to 5m0s for pod "pod-24789a12-0e0a-4b6e-a034-2acaebbbf2c8" in namespace "emptydir-223" to be "success or failure" May 12 17:09:47.304: INFO: Pod "pod-24789a12-0e0a-4b6e-a034-2acaebbbf2c8": Phase="Pending", Reason="", readiness=false. Elapsed: 754.718295ms May 12 17:09:49.517: INFO: Pod "pod-24789a12-0e0a-4b6e-a034-2acaebbbf2c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.968120257s May 12 17:09:52.367: INFO: Pod "pod-24789a12-0e0a-4b6e-a034-2acaebbbf2c8": Phase="Pending", Reason="", readiness=false. Elapsed: 5.817294829s May 12 17:09:55.615: INFO: Pod "pod-24789a12-0e0a-4b6e-a034-2acaebbbf2c8": Phase="Pending", Reason="", readiness=false. Elapsed: 9.065340666s May 12 17:09:57.950: INFO: Pod "pod-24789a12-0e0a-4b6e-a034-2acaebbbf2c8": Phase="Pending", Reason="", readiness=false. Elapsed: 11.401136537s May 12 17:09:59.982: INFO: Pod "pod-24789a12-0e0a-4b6e-a034-2acaebbbf2c8": Phase="Pending", Reason="", readiness=false. Elapsed: 13.433099049s May 12 17:10:02.487: INFO: Pod "pod-24789a12-0e0a-4b6e-a034-2acaebbbf2c8": Phase="Pending", Reason="", readiness=false. Elapsed: 15.937300912s May 12 17:10:05.170: INFO: Pod "pod-24789a12-0e0a-4b6e-a034-2acaebbbf2c8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 18.620493481s STEP: Saw pod success May 12 17:10:05.170: INFO: Pod "pod-24789a12-0e0a-4b6e-a034-2acaebbbf2c8" satisfied condition "success or failure" May 12 17:10:05.172: INFO: Trying to get logs from node jerma-worker2 pod pod-24789a12-0e0a-4b6e-a034-2acaebbbf2c8 container test-container: STEP: delete the pod May 12 17:10:09.367: INFO: Waiting for pod pod-24789a12-0e0a-4b6e-a034-2acaebbbf2c8 to disappear May 12 17:10:09.374: INFO: Pod pod-24789a12-0e0a-4b6e-a034-2acaebbbf2c8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:10:09.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-223" for this suite. • [SLOW TEST:26.878 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":148,"skipped":2532,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:10:11.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 12 17:10:18.222: INFO: Waiting up to 5m0s for pod "downward-api-6b8f8c80-2385-4bf7-a719-40bec5fa6912" in namespace "downward-api-5954" to be "success or failure" May 12 17:10:19.416: INFO: Pod "downward-api-6b8f8c80-2385-4bf7-a719-40bec5fa6912": Phase="Pending", Reason="", readiness=false. Elapsed: 1.194007377s May 12 17:10:22.762: INFO: Pod "downward-api-6b8f8c80-2385-4bf7-a719-40bec5fa6912": Phase="Pending", Reason="", readiness=false. Elapsed: 4.539956627s May 12 17:10:25.255: INFO: Pod "downward-api-6b8f8c80-2385-4bf7-a719-40bec5fa6912": Phase="Pending", Reason="", readiness=false. Elapsed: 7.032683818s May 12 17:10:28.032: INFO: Pod "downward-api-6b8f8c80-2385-4bf7-a719-40bec5fa6912": Phase="Pending", Reason="", readiness=false. Elapsed: 9.809784739s May 12 17:10:30.253: INFO: Pod "downward-api-6b8f8c80-2385-4bf7-a719-40bec5fa6912": Phase="Pending", Reason="", readiness=false. Elapsed: 12.030927912s May 12 17:10:33.164: INFO: Pod "downward-api-6b8f8c80-2385-4bf7-a719-40bec5fa6912": Phase="Pending", Reason="", readiness=false. Elapsed: 14.942277525s May 12 17:10:35.578: INFO: Pod "downward-api-6b8f8c80-2385-4bf7-a719-40bec5fa6912": Phase="Running", Reason="", readiness=true. 
Elapsed: 17.355416591s May 12 17:10:37.950: INFO: Pod "downward-api-6b8f8c80-2385-4bf7-a719-40bec5fa6912": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.727488793s STEP: Saw pod success May 12 17:10:37.950: INFO: Pod "downward-api-6b8f8c80-2385-4bf7-a719-40bec5fa6912" satisfied condition "success or failure" May 12 17:10:37.956: INFO: Trying to get logs from node jerma-worker2 pod downward-api-6b8f8c80-2385-4bf7-a719-40bec5fa6912 container dapi-container: STEP: delete the pod May 12 17:10:39.504: INFO: Waiting for pod downward-api-6b8f8c80-2385-4bf7-a719-40bec5fa6912 to disappear May 12 17:10:39.549: INFO: Pod downward-api-6b8f8c80-2385-4bf7-a719-40bec5fa6912 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:10:39.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5954" for this suite. • [SLOW TEST:28.423 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":149,"skipped":2540,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:10:40.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 12 17:10:42.032: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:10:44.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9574" for this suite. 
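A minimal CRD that would exercise the same create/delete path (group and names invented; note metadata.name must be <plural>.<group>):

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: noxus.mygroup.example.com
spec:
  group: mygroup.example.com
  scope: Namespaced
  names:
    plural: noxus
    singular: noxu
    kind: Noxu
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
EOF

kubectl delete crd noxus.mygroup.example.com

apiextensions.k8s.io/v1, served by the v1.17 apiserver used here, requires a structural schema, hence the explicit openAPIV3Schema; the older v1beta1 form did not.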
• [SLOW TEST:5.751 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":150,"skipped":2562,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:10:45.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 12 17:10:59.391: INFO: Successfully updated pod "pod-update-ef95f1e0-83a8-4663-962e-5564d749cfea" STEP: verifying the updated pod is in kubernetes May 12 17:10:59.673: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:10:59.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4266" for this suite. 
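The "updating the pod" step patches mutable metadata: almost everything in pod.spec is immutable after creation, but labels are not, so a by-hand equivalent is (pod name illustrative):

kubectl patch pod pod-update-demo --type merge \
  -p '{"metadata":{"labels":{"time":"updated"}}}'

or the same thing with the label subcommand:

kubectl label pod pod-update-demo time=updated --overwrite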
• [SLOW TEST:13.774 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":151,"skipped":2584,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:10:59.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 12 17:11:00.702: INFO: Waiting up to 5m0s for pod "downward-api-ba231e97-e636-4d40-89c4-ed7f60f995ff" in namespace "downward-api-1691" to be "success or failure" May 12 17:11:00.960: INFO: Pod "downward-api-ba231e97-e636-4d40-89c4-ed7f60f995ff": Phase="Pending", Reason="", readiness=false. Elapsed: 258.304596ms May 12 17:11:03.092: INFO: Pod "downward-api-ba231e97-e636-4d40-89c4-ed7f60f995ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.390513239s May 12 17:11:05.439: INFO: Pod "downward-api-ba231e97-e636-4d40-89c4-ed7f60f995ff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.736996037s May 12 17:11:07.751: INFO: Pod "downward-api-ba231e97-e636-4d40-89c4-ed7f60f995ff": Phase="Pending", Reason="", readiness=false. Elapsed: 7.04926188s May 12 17:11:10.111: INFO: Pod "downward-api-ba231e97-e636-4d40-89c4-ed7f60f995ff": Phase="Pending", Reason="", readiness=false. Elapsed: 9.408952853s May 12 17:11:12.960: INFO: Pod "downward-api-ba231e97-e636-4d40-89c4-ed7f60f995ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.258143939s STEP: Saw pod success May 12 17:11:12.960: INFO: Pod "downward-api-ba231e97-e636-4d40-89c4-ed7f60f995ff" satisfied condition "success or failure" May 12 17:11:13.746: INFO: Trying to get logs from node jerma-worker pod downward-api-ba231e97-e636-4d40-89c4-ed7f60f995ff container dapi-container: STEP: delete the pod May 12 17:11:15.186: INFO: Waiting for pod downward-api-ba231e97-e636-4d40-89c4-ed7f60f995ff to disappear May 12 17:11:15.214: INFO: Pod downward-api-ba231e97-e636-4d40-89c4-ed7f60f995ff no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:11:15.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1691" for this suite. 
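The UID reaches the container through a fieldRef environment variable; a minimal sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo-downward-uid
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid
EOF

kubectl logs demo-downward-uid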
• [SLOW TEST:15.511 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":152,"skipped":2590,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:11:15.229: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-9957 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-9957 I0512 17:11:20.039240 7 runners.go:189] Created replication controller with name: externalname-service, namespace: services-9957, replica count: 2 I0512 17:11:23.089924 7 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 17:11:26.090098 7 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 17:11:29.090350 7 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 17:11:32.090544 7 runners.go:189] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 17:11:35.090741 7 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 12 17:11:35.090: INFO: Creating new exec pod May 12 17:11:49.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9957 execpodcbw6b -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 12 17:11:57.751: INFO: stderr: "I0512 17:11:57.672586 2994 log.go:172] (0xc0005e1290) (0xc0005fdea0) Create stream\nI0512 17:11:57.672614 2994 log.go:172] (0xc0005e1290) (0xc0005fdea0) Stream added, broadcasting: 1\nI0512 17:11:57.674796 2994 log.go:172] (0xc0005e1290) Reply frame received for 1\nI0512 17:11:57.674842 2994 log.go:172] (0xc0005e1290) (0xc000568640) Create stream\nI0512 17:11:57.674854 2994 log.go:172] (0xc0005e1290) (0xc000568640) Stream added, broadcasting: 
3\nI0512 17:11:57.675552 2994 log.go:172] (0xc0005e1290) Reply frame received for 3\nI0512 17:11:57.675584 2994 log.go:172] (0xc0005e1290) (0xc000765400) Create stream\nI0512 17:11:57.675594 2994 log.go:172] (0xc0005e1290) (0xc000765400) Stream added, broadcasting: 5\nI0512 17:11:57.676369 2994 log.go:172] (0xc0005e1290) Reply frame received for 5\nI0512 17:11:57.746228 2994 log.go:172] (0xc0005e1290) Data frame received for 3\nI0512 17:11:57.746266 2994 log.go:172] (0xc000568640) (3) Data frame handling\nI0512 17:11:57.746288 2994 log.go:172] (0xc0005e1290) Data frame received for 5\nI0512 17:11:57.746303 2994 log.go:172] (0xc000765400) (5) Data frame handling\nI0512 17:11:57.746316 2994 log.go:172] (0xc000765400) (5) Data frame sent\nI0512 17:11:57.746328 2994 log.go:172] (0xc0005e1290) Data frame received for 5\nI0512 17:11:57.746349 2994 log.go:172] (0xc000765400) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0512 17:11:57.747391 2994 log.go:172] (0xc0005e1290) Data frame received for 1\nI0512 17:11:57.747406 2994 log.go:172] (0xc0005fdea0) (1) Data frame handling\nI0512 17:11:57.747416 2994 log.go:172] (0xc0005fdea0) (1) Data frame sent\nI0512 17:11:57.747423 2994 log.go:172] (0xc0005e1290) (0xc0005fdea0) Stream removed, broadcasting: 1\nI0512 17:11:57.747698 2994 log.go:172] (0xc0005e1290) (0xc0005fdea0) Stream removed, broadcasting: 1\nI0512 17:11:57.747714 2994 log.go:172] (0xc0005e1290) (0xc000568640) Stream removed, broadcasting: 3\nI0512 17:11:57.747723 2994 log.go:172] (0xc0005e1290) (0xc000765400) Stream removed, broadcasting: 5\n" May 12 17:11:57.751: INFO: stdout: "" May 12 17:11:57.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9957 execpodcbw6b -- /bin/sh -x -c nc -zv -t -w 2 10.103.224.207 80' May 12 17:11:57.959: INFO: stderr: "I0512 17:11:57.879199 3024 log.go:172] (0xc00057a210) (0xc0004f3540) Create stream\nI0512 17:11:57.879248 3024 log.go:172] (0xc00057a210) (0xc0004f3540) Stream added, broadcasting: 1\nI0512 17:11:57.882492 3024 log.go:172] (0xc00057a210) Reply frame received for 1\nI0512 17:11:57.882539 3024 log.go:172] (0xc00057a210) (0xc0006a1ae0) Create stream\nI0512 17:11:57.882551 3024 log.go:172] (0xc00057a210) (0xc0006a1ae0) Stream added, broadcasting: 3\nI0512 17:11:57.883281 3024 log.go:172] (0xc00057a210) Reply frame received for 3\nI0512 17:11:57.883310 3024 log.go:172] (0xc00057a210) (0xc0009aa000) Create stream\nI0512 17:11:57.883319 3024 log.go:172] (0xc00057a210) (0xc0009aa000) Stream added, broadcasting: 5\nI0512 17:11:57.884063 3024 log.go:172] (0xc00057a210) Reply frame received for 5\nI0512 17:11:57.955561 3024 log.go:172] (0xc00057a210) Data frame received for 3\nI0512 17:11:57.955583 3024 log.go:172] (0xc0006a1ae0) (3) Data frame handling\nI0512 17:11:57.955596 3024 log.go:172] (0xc00057a210) Data frame received for 5\nI0512 17:11:57.955600 3024 log.go:172] (0xc0009aa000) (5) Data frame handling\nI0512 17:11:57.955606 3024 log.go:172] (0xc0009aa000) (5) Data frame sent\nI0512 17:11:57.955614 3024 log.go:172] (0xc00057a210) Data frame received for 5\nI0512 17:11:57.955617 3024 log.go:172] (0xc0009aa000) (5) Data frame handling\n+ nc -zv -t -w 2 10.103.224.207 80\nConnection to 10.103.224.207 80 port [tcp/http] succeeded!\nI0512 17:11:57.956493 3024 log.go:172] (0xc00057a210) Data frame received for 1\nI0512 17:11:57.956503 3024 log.go:172] (0xc0004f3540) (1) Data frame handling\nI0512 17:11:57.956509 3024 
log.go:172] (0xc0004f3540) (1) Data frame sent\nI0512 17:11:57.956517 3024 log.go:172] (0xc00057a210) (0xc0004f3540) Stream removed, broadcasting: 1\nI0512 17:11:57.956527 3024 log.go:172] (0xc00057a210) Go away received\nI0512 17:11:57.956782 3024 log.go:172] (0xc00057a210) (0xc0004f3540) Stream removed, broadcasting: 1\nI0512 17:11:57.956793 3024 log.go:172] (0xc00057a210) (0xc0006a1ae0) Stream removed, broadcasting: 3\nI0512 17:11:57.956803 3024 log.go:172] (0xc00057a210) (0xc0009aa000) Stream removed, broadcasting: 5\n" May 12 17:11:57.959: INFO: stdout: "" May 12 17:11:57.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9957 execpodcbw6b -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 30374' May 12 17:11:58.148: INFO: stderr: "I0512 17:11:58.076940 3044 log.go:172] (0xc00095c0b0) (0xc00077eaa0) Create stream\nI0512 17:11:58.076987 3044 log.go:172] (0xc00095c0b0) (0xc00077eaa0) Stream added, broadcasting: 1\nI0512 17:11:58.079638 3044 log.go:172] (0xc00095c0b0) Reply frame received for 1\nI0512 17:11:58.079674 3044 log.go:172] (0xc00095c0b0) (0xc0008ec000) Create stream\nI0512 17:11:58.079692 3044 log.go:172] (0xc00095c0b0) (0xc0008ec000) Stream added, broadcasting: 3\nI0512 17:11:58.080443 3044 log.go:172] (0xc00095c0b0) Reply frame received for 3\nI0512 17:11:58.080472 3044 log.go:172] (0xc00095c0b0) (0xc0008f2000) Create stream\nI0512 17:11:58.080488 3044 log.go:172] (0xc00095c0b0) (0xc0008f2000) Stream added, broadcasting: 5\nI0512 17:11:58.081489 3044 log.go:172] (0xc00095c0b0) Reply frame received for 5\nI0512 17:11:58.140664 3044 log.go:172] (0xc00095c0b0) Data frame received for 5\nI0512 17:11:58.140712 3044 log.go:172] (0xc00095c0b0) Data frame received for 3\nI0512 17:11:58.140757 3044 log.go:172] (0xc0008ec000) (3) Data frame handling\nI0512 17:11:58.140788 3044 log.go:172] (0xc0008f2000) (5) Data frame handling\nI0512 17:11:58.140821 3044 log.go:172] (0xc0008f2000) (5) Data frame sent\nI0512 17:11:58.140847 3044 log.go:172] (0xc00095c0b0) Data frame received for 5\nI0512 17:11:58.140867 3044 log.go:172] (0xc0008f2000) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.10 30374\nConnection to 172.17.0.10 30374 port [tcp/30374] succeeded!\nI0512 17:11:58.142679 3044 log.go:172] (0xc00095c0b0) Data frame received for 1\nI0512 17:11:58.142718 3044 log.go:172] (0xc00077eaa0) (1) Data frame handling\nI0512 17:11:58.142771 3044 log.go:172] (0xc00077eaa0) (1) Data frame sent\nI0512 17:11:58.143217 3044 log.go:172] (0xc00095c0b0) (0xc00077eaa0) Stream removed, broadcasting: 1\nI0512 17:11:58.143260 3044 log.go:172] (0xc00095c0b0) Go away received\nI0512 17:11:58.143696 3044 log.go:172] (0xc00095c0b0) (0xc00077eaa0) Stream removed, broadcasting: 1\nI0512 17:11:58.143713 3044 log.go:172] (0xc00095c0b0) (0xc0008ec000) Stream removed, broadcasting: 3\nI0512 17:11:58.143721 3044 log.go:172] (0xc00095c0b0) (0xc0008f2000) Stream removed, broadcasting: 5\n" May 12 17:11:58.149: INFO: stdout: "" May 12 17:11:58.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9957 execpodcbw6b -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 30374' May 12 17:11:58.352: INFO: stderr: "I0512 17:11:58.286763 3063 log.go:172] (0xc0009aa0b0) (0xc000978140) Create stream\nI0512 17:11:58.286820 3063 log.go:172] (0xc0009aa0b0) (0xc000978140) Stream added, broadcasting: 1\nI0512 17:11:58.289782 3063 log.go:172] (0xc0009aa0b0) Reply frame received for 1\nI0512 17:11:58.289844 3063 log.go:172] (0xc0009aa0b0) (0xc0009781e0) 
Create stream\nI0512 17:11:58.289878 3063 log.go:172] (0xc0009aa0b0) (0xc0009781e0) Stream added, broadcasting: 3\nI0512 17:11:58.290692 3063 log.go:172] (0xc0009aa0b0) Reply frame received for 3\nI0512 17:11:58.290860 3063 log.go:172] (0xc0009aa0b0) (0xc0009980a0) Create stream\nI0512 17:11:58.290869 3063 log.go:172] (0xc0009aa0b0) (0xc0009980a0) Stream added, broadcasting: 5\nI0512 17:11:58.291467 3063 log.go:172] (0xc0009aa0b0) Reply frame received for 5\nI0512 17:11:58.346666 3063 log.go:172] (0xc0009aa0b0) Data frame received for 3\nI0512 17:11:58.346689 3063 log.go:172] (0xc0009781e0) (3) Data frame handling\nI0512 17:11:58.346754 3063 log.go:172] (0xc0009aa0b0) Data frame received for 5\nI0512 17:11:58.346783 3063 log.go:172] (0xc0009980a0) (5) Data frame handling\nI0512 17:11:58.346804 3063 log.go:172] (0xc0009980a0) (5) Data frame sent\nI0512 17:11:58.346821 3063 log.go:172] (0xc0009aa0b0) Data frame received for 5\nI0512 17:11:58.346830 3063 log.go:172] (0xc0009980a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.8 30374\nConnection to 172.17.0.8 30374 port [tcp/30374] succeeded!\nI0512 17:11:58.348013 3063 log.go:172] (0xc0009aa0b0) Data frame received for 1\nI0512 17:11:58.348035 3063 log.go:172] (0xc000978140) (1) Data frame handling\nI0512 17:11:58.348056 3063 log.go:172] (0xc000978140) (1) Data frame sent\nI0512 17:11:58.348134 3063 log.go:172] (0xc0009aa0b0) (0xc000978140) Stream removed, broadcasting: 1\nI0512 17:11:58.348301 3063 log.go:172] (0xc0009aa0b0) Go away received\nI0512 17:11:58.348442 3063 log.go:172] (0xc0009aa0b0) (0xc000978140) Stream removed, broadcasting: 1\nI0512 17:11:58.348458 3063 log.go:172] (0xc0009aa0b0) (0xc0009781e0) Stream removed, broadcasting: 3\nI0512 17:11:58.348469 3063 log.go:172] (0xc0009aa0b0) (0xc0009980a0) Stream removed, broadcasting: 5\n" May 12 17:11:58.352: INFO: stdout: "" May 12 17:11:58.352: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:11:58.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9957" for this suite. 
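For reference, the three connectivity checks in this spec can be reproduced by hand against any service that has just been switched from ExternalName to NodePort. The pattern below mirrors the commands in the log exactly; the namespace, exec pod, service name, ClusterIP, node IPs, and NodePort are placeholders standing in for the values generated in this run:

    # 1) service DNS name on the service port
    kubectl --kubeconfig=/root/.kube/config exec --namespace=<ns> <exec-pod> -- \
      /bin/sh -x -c 'nc -zv -t -w 2 <service-name> 80'
    # 2) the service ClusterIP directly
    kubectl --kubeconfig=/root/.kube/config exec --namespace=<ns> <exec-pod> -- \
      /bin/sh -x -c 'nc -zv -t -w 2 <cluster-ip> 80'
    # 3) each node IP on the allocated NodePort
    kubectl --kubeconfig=/root/.kube/config exec --namespace=<ns> <exec-pod> -- \
      /bin/sh -x -c 'nc -zv -t -w 2 <node-ip> <node-port>'

Each check succeeds when nc prints a "Connection to ... succeeded!" line on stderr, as seen above.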
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:43.524 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":153,"skipped":2609,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:11:58.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod test-webserver-20325539-1b1b-4757-8067-073b8d419350 in namespace container-probe-8805 May 12 17:12:04.991: INFO: Started pod test-webserver-20325539-1b1b-4757-8067-073b8d419350 in namespace container-probe-8805 STEP: checking the pod's current state and verifying that restartCount is present May 12 17:12:05.040: INFO: Initial restart count of pod test-webserver-20325539-1b1b-4757-8067-073b8d419350 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:16:06.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8805" for this suite. 
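The probe spec above boils down to a pod whose container answers on /healthz and a kubelet that leaves restartCount at 0 for the whole observation window (note the roughly four minutes between the initial restart-count check and the teardown). A minimal sketch of such a pod follows; the pod name, image, and port are assumptions for illustration, not what the framework actually deploys:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: test-webserver-demo                # hypothetical name
    spec:
      containers:
      - name: test-webserver
        image: gcr.io/kubernetes-e2e-test-images/test-webserver:1.0   # assumed image
        ports:
        - containerPort: 80
        livenessProbe:
          httpGet:
            path: /healthz                     # probe path from the spec name
            port: 80
          initialDelaySeconds: 15
          timeoutSeconds: 1
    EOF
    # mirror of "verifying that restartCount is present"
    kubectl get pod test-webserver-demo \
      -o jsonpath='{.status.containerStatuses[0].restartCount}'

As long as /healthz keeps returning 200, the jsonpath query should keep printing 0.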
• [SLOW TEST:247.795 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":154,"skipped":2641,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:16:06.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap that has name configmap-test-emptyKey-58c33ca0-a46e-409e-b16d-de49763a2a56 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:16:06.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1303" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":155,"skipped":2666,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:16:07.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 12 17:16:07.444: INFO: Waiting up to 5m0s for pod "busybox-user-65534-ad433882-1943-4144-80c2-0c942f794501" in namespace "security-context-test-2496" to be "success or failure" May 12 17:16:07.478: INFO: Pod "busybox-user-65534-ad433882-1943-4144-80c2-0c942f794501": Phase="Pending", Reason="", readiness=false. 
Elapsed: 33.981572ms May 12 17:16:09.629: INFO: Pod "busybox-user-65534-ad433882-1943-4144-80c2-0c942f794501": Phase="Pending", Reason="", readiness=false. Elapsed: 2.18560084s May 12 17:16:11.755: INFO: Pod "busybox-user-65534-ad433882-1943-4144-80c2-0c942f794501": Phase="Pending", Reason="", readiness=false. Elapsed: 4.311668563s May 12 17:16:13.759: INFO: Pod "busybox-user-65534-ad433882-1943-4144-80c2-0c942f794501": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.314892414s May 12 17:16:13.759: INFO: Pod "busybox-user-65534-ad433882-1943-4144-80c2-0c942f794501" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:16:13.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-2496" for this suite. • [SLOW TEST:6.670 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 When creating a container with runAsUser /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:43 should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":156,"skipped":2699,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SS ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:16:13.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: executing a command with run --rm and attach with stdin May 12 17:16:13.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3807 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' May 12 17:16:16.925: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0512 17:16:16.800304 3083 log.go:172] (0xc000677340) (0xc000a6c320) Create stream\nI0512 17:16:16.800363 3083 log.go:172] (0xc000677340) (0xc000a6c320) Stream added, broadcasting: 1\nI0512 17:16:16.802725 3083 log.go:172] (0xc000677340) Reply frame received for 1\nI0512 17:16:16.802786 3083 log.go:172] (0xc000677340) (0xc000a6c3c0) Create stream\nI0512 17:16:16.802808 3083 log.go:172] (0xc000677340) (0xc000a6c3c0) Stream added, broadcasting: 3\nI0512 17:16:16.803672 3083 log.go:172] (0xc000677340) Reply frame received for 3\nI0512 17:16:16.803719 3083 log.go:172] (0xc000677340) (0xc0005bb9a0) Create stream\nI0512 17:16:16.803740 3083 log.go:172] (0xc000677340) (0xc0005bb9a0) Stream added, broadcasting: 5\nI0512 17:16:16.804667 3083 log.go:172] (0xc000677340) Reply frame received for 5\nI0512 17:16:16.804695 3083 log.go:172] (0xc000677340) (0xc000294000) Create stream\nI0512 17:16:16.804703 3083 log.go:172] (0xc000677340) (0xc000294000) Stream added, broadcasting: 7\nI0512 17:16:16.805665 3083 log.go:172] (0xc000677340) Reply frame received for 7\nI0512 17:16:16.805855 3083 log.go:172] (0xc000a6c3c0) (3) Writing data frame\nI0512 17:16:16.806032 3083 log.go:172] (0xc000a6c3c0) (3) Writing data frame\nI0512 17:16:16.806631 3083 log.go:172] (0xc000677340) Data frame received for 5\nI0512 17:16:16.806642 3083 log.go:172] (0xc0005bb9a0) (5) Data frame handling\nI0512 17:16:16.806649 3083 log.go:172] (0xc0005bb9a0) (5) Data frame sent\nI0512 17:16:16.807406 3083 log.go:172] (0xc000677340) Data frame received for 5\nI0512 17:16:16.807417 3083 log.go:172] (0xc0005bb9a0) (5) Data frame handling\nI0512 17:16:16.807424 3083 log.go:172] (0xc0005bb9a0) (5) Data frame sent\nI0512 17:16:16.839577 3083 log.go:172] (0xc000677340) Data frame received for 1\nI0512 17:16:16.839610 3083 log.go:172] (0xc000a6c320) (1) Data frame handling\nI0512 17:16:16.839622 3083 log.go:172] (0xc000a6c320) (1) Data frame sent\nI0512 17:16:16.839695 3083 log.go:172] (0xc000677340) Data frame received for 7\nI0512 17:16:16.839716 3083 log.go:172] (0xc000294000) (7) Data frame handling\nI0512 17:16:16.839732 3083 log.go:172] (0xc000677340) Data frame received for 5\nI0512 17:16:16.839738 3083 log.go:172] (0xc0005bb9a0) (5) Data frame handling\nI0512 17:16:16.839754 3083 log.go:172] (0xc000677340) (0xc000a6c320) Stream removed, broadcasting: 1\nI0512 17:16:16.839841 3083 log.go:172] (0xc000677340) (0xc000a6c3c0) Stream removed, broadcasting: 3\nI0512 17:16:16.839906 3083 log.go:172] (0xc000677340) Go away received\nI0512 17:16:16.840046 3083 log.go:172] (0xc000677340) (0xc000a6c320) Stream removed, broadcasting: 1\nI0512 17:16:16.840105 3083 log.go:172] (0xc000677340) (0xc000a6c3c0) Stream removed, broadcasting: 3\nI0512 17:16:16.840139 3083 log.go:172] (0xc000677340) (0xc0005bb9a0) Stream removed, broadcasting: 5\nI0512 17:16:16.840246 3083 log.go:172] (0xc000677340) (0xc000294000) Stream removed, broadcasting: 7\n" May 12 17:16:16.925: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:16:18.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3807" for this suite. 
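The stderr above notes that --generator=job/v1 is deprecated and names the replacements itself. A rough equivalent using kubectl create is sketched below, reusing the job name and image from this run; kubectl create job has no --rm/--attach/--stdin, so the stdin echo is folded into the command and cleanup becomes a separate delete:

    kubectl create job e2e-test-rm-busybox-job \
      --image=docker.io/library/busybox:1.29 -- sh -c 'echo stdin closed'
    kubectl wait --for=condition=complete job/e2e-test-rm-busybox-job
    kubectl delete job e2e-test-rm-busybox-job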
• [SLOW TEST:5.234 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1837 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":157,"skipped":2701,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:16:19.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 12 17:16:19.104: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1c6a57fa-762b-41d3-90bf-627ddee65493" in namespace "projected-9264" to be "success or failure" May 12 17:16:19.109: INFO: Pod "downwardapi-volume-1c6a57fa-762b-41d3-90bf-627ddee65493": Phase="Pending", Reason="", readiness=false. Elapsed: 5.481499ms May 12 17:16:21.113: INFO: Pod "downwardapi-volume-1c6a57fa-762b-41d3-90bf-627ddee65493": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009583901s May 12 17:16:23.117: INFO: Pod "downwardapi-volume-1c6a57fa-762b-41d3-90bf-627ddee65493": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013433159s May 12 17:16:25.120: INFO: Pod "downwardapi-volume-1c6a57fa-762b-41d3-90bf-627ddee65493": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016540578s STEP: Saw pod success May 12 17:16:25.120: INFO: Pod "downwardapi-volume-1c6a57fa-762b-41d3-90bf-627ddee65493" satisfied condition "success or failure" May 12 17:16:25.124: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-1c6a57fa-762b-41d3-90bf-627ddee65493 container client-container: STEP: delete the pod May 12 17:16:25.152: INFO: Waiting for pod downwardapi-volume-1c6a57fa-762b-41d3-90bf-627ddee65493 to disappear May 12 17:16:25.156: INFO: Pod downwardapi-volume-1c6a57fa-762b-41d3-90bf-627ddee65493 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:16:25.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9264" for this suite. 
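What the projected downwardAPI spec above exercises is a resourceFieldRef for limits.cpu on a container that deliberately sets no CPU limit, so the projected file falls back to the node's allocatable CPU. A minimal sketch, with the pod name, image, and reader flag as illustrative assumptions:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-volume-demo                  # hypothetical
    spec:
      containers:
      - name: client-container
        image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0    # assumed image
        args: ["--file_content=/etc/podinfo/cpu_limit"]           # assumed reader
        # intentionally no resources.limits.cpu
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: cpu_limit
                resourceFieldRef:
                  containerName: client-container
                  resource: limits.cpu
    EOF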
• [SLOW TEST:6.163 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":158,"skipped":2701,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:16:25.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:16:32.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5224" for this suite. • [SLOW TEST:7.347 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":278,"completed":159,"skipped":2705,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:16:32.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0512 17:16:42.969979 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 12 17:16:42.970: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:16:42.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1265" for this suite. 
• [SLOW TEST:10.467 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":160,"skipped":2714,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:16:42.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-01b657d5-2200-4fe9-bdb4-aba46fcec95b STEP: Creating a pod to test consume configMaps May 12 17:16:43.057: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-69720882-3acf-4e9b-8c0d-f16fcb54dbb0" in namespace "projected-1453" to be "success or failure" May 12 17:16:43.060: INFO: Pod "pod-projected-configmaps-69720882-3acf-4e9b-8c0d-f16fcb54dbb0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.756099ms May 12 17:16:45.063: INFO: Pod "pod-projected-configmaps-69720882-3acf-4e9b-8c0d-f16fcb54dbb0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005774974s May 12 17:16:47.066: INFO: Pod "pod-projected-configmaps-69720882-3acf-4e9b-8c0d-f16fcb54dbb0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009181809s May 12 17:16:49.069: INFO: Pod "pod-projected-configmaps-69720882-3acf-4e9b-8c0d-f16fcb54dbb0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.012334298s STEP: Saw pod success May 12 17:16:49.069: INFO: Pod "pod-projected-configmaps-69720882-3acf-4e9b-8c0d-f16fcb54dbb0" satisfied condition "success or failure" May 12 17:16:49.072: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-69720882-3acf-4e9b-8c0d-f16fcb54dbb0 container projected-configmap-volume-test: STEP: delete the pod May 12 17:16:49.102: INFO: Waiting for pod pod-projected-configmaps-69720882-3acf-4e9b-8c0d-f16fcb54dbb0 to disappear May 12 17:16:49.115: INFO: Pod pod-projected-configmaps-69720882-3acf-4e9b-8c0d-f16fcb54dbb0 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:16:49.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1453" for this suite. 
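The defaultMode variant above mounts a projected ConfigMap with a non-default file mode. A minimal sketch follows; the ConfigMap name, key, image, and reader flag are placeholders, and 0400 stands in for whatever mode this run actually picked:

    kubectl create configmap projected-cm-demo --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-projected-configmaps-demo            # hypothetical
    spec:
      containers:
      - name: projected-configmap-volume-test
        image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0        # assumed image
        args: ["--file_mode=/etc/projected-configmap-volume/data-1"]  # assumed reader
        volumeMounts:
        - name: projected-configmap-volume
          mountPath: /etc/projected-configmap-volume
      volumes:
      - name: projected-configmap-volume
        projected:
          defaultMode: 0400                # YAML octal literal (256 decimal)
          sources:
          - configMap:
              name: projected-cm-demo
    EOF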
• [SLOW TEST:6.143 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":161,"skipped":2730,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} S ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:16:49.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 12 17:16:49.201: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-60835cf6-2ec4-485e-8349-d981ad23f218" in namespace "security-context-test-5939" to be "success or failure" May 12 17:16:49.205: INFO: Pod "busybox-privileged-false-60835cf6-2ec4-485e-8349-d981ad23f218": Phase="Pending", Reason="", readiness=false. Elapsed: 4.171967ms May 12 17:16:51.210: INFO: Pod "busybox-privileged-false-60835cf6-2ec4-485e-8349-d981ad23f218": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008775246s May 12 17:16:53.213: INFO: Pod "busybox-privileged-false-60835cf6-2ec4-485e-8349-d981ad23f218": Phase="Running", Reason="", readiness=true. Elapsed: 4.011773759s May 12 17:16:55.216: INFO: Pod "busybox-privileged-false-60835cf6-2ec4-485e-8349-d981ad23f218": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015028506s May 12 17:16:55.216: INFO: Pod "busybox-privileged-false-60835cf6-2ec4-485e-8349-d981ad23f218" satisfied condition "success or failure" May 12 17:16:55.232: INFO: Got logs for pod "busybox-privileged-false-60835cf6-2ec4-485e-8349-d981ad23f218": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:16:55.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5939" for this suite. 
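The "RTNETLINK answers: Operation not permitted" line captured above is the expected outcome: without privileged mode (and without CAP_NET_ADMIN) the container cannot create network devices. A sketch that reproduces it, with a hypothetical pod name and the same busybox tag used throughout this run:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox-privileged-false-demo
    spec:
      restartPolicy: Never
      containers:
      - name: busybox-privileged-false-demo
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "ip link add dummy0 type dummy || true"]
        securityContext:
          privileged: false
    EOF
    kubectl logs busybox-privileged-false-demo
    # expected: ip: RTNETLINK answers: Operation not permitted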
• [SLOW TEST:6.116 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 When creating a pod with privileged /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:225 should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":162,"skipped":2731,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:16:55.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes May 12 17:16:55.452: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:17:09.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2794" for this suite. 
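The submit/remove spec above is essentially a watch on the pod list bracketing a create and a graceful delete. Something similar can be observed by hand; the pod name and image below are placeholders:

    kubectl run pod-submit-demo --image=docker.io/library/nginx:1.14-alpine \
      --restart=Never
    # in a second shell: follow updates to the pod list
    kubectl get pods --watch-only
    kubectl delete pod pod-submit-demo --grace-period=30
    # the watch prints updated rows while the kubelet terminates the
    # container, and the pod drops out of the list once removal is confirmed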
• [SLOW TEST:14.656 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":163,"skipped":2736,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:17:09.895: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 12 17:17:10.231: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties May 12 17:17:13.324: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4138 create -f -' May 12 17:17:23.129: INFO: stderr: "" May 12 17:17:23.129: INFO: stdout: "e2e-test-crd-publish-openapi-9936-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 12 17:17:23.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4138 delete e2e-test-crd-publish-openapi-9936-crds test-foo' May 12 17:17:23.280: INFO: stderr: "" May 12 17:17:23.280: INFO: stdout: "e2e-test-crd-publish-openapi-9936-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" May 12 17:17:23.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4138 apply -f -' May 12 17:17:23.605: INFO: stderr: "" May 12 17:17:23.605: INFO: stdout: "e2e-test-crd-publish-openapi-9936-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 12 17:17:23.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4138 delete e2e-test-crd-publish-openapi-9936-crds test-foo' May 12 17:17:24.276: INFO: stderr: "" May 12 17:17:24.276: INFO: stdout: "e2e-test-crd-publish-openapi-9936-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema May 12 17:17:24.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4138 create -f -' May 12 17:17:25.054: INFO: rc: 1 May 12 17:17:25.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4138 apply -f -' May 12 17:17:25.630: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required 
properties May 12 17:17:25.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4138 create -f -' May 12 17:17:26.005: INFO: rc: 1 May 12 17:17:26.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4138 apply -f -' May 12 17:17:26.573: INFO: rc: 1 STEP: kubectl explain works to explain CR properties May 12 17:17:26.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9936-crds' May 12 17:17:27.592: INFO: stderr: "" May 12 17:17:27.592: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9936-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively May 12 17:17:27.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9936-crds.metadata' May 12 17:17:27.969: INFO: stderr: "" May 12 17:17:27.969: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9936-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. 
Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. 
Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. 
Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" May 12 17:17:27.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9936-crds.spec' May 12 17:17:28.235: INFO: stderr: "" May 12 17:17:28.235: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9936-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" May 12 17:17:28.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9936-crds.spec.bars' May 12 17:17:28.504: INFO: stderr: "" May 12 17:17:28.504: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9936-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist May 12 17:17:28.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9936-crds.spec.bars2' May 12 17:17:28.783: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:17:31.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4138" for this suite. • [SLOW TEST:22.197 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":164,"skipped":2773,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:17:32.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 12 17:17:32.237: INFO: Waiting up to 5m0s for pod "downwardapi-volume-38cc16dd-7364-40ca-ba24-a35f67596968" in namespace "downward-api-4505" to be "success or failure" May 12 17:17:32.254: INFO: Pod 
"downwardapi-volume-38cc16dd-7364-40ca-ba24-a35f67596968": Phase="Pending", Reason="", readiness=false. Elapsed: 17.194999ms May 12 17:17:34.368: INFO: Pod "downwardapi-volume-38cc16dd-7364-40ca-ba24-a35f67596968": Phase="Pending", Reason="", readiness=false. Elapsed: 2.130782877s May 12 17:17:36.371: INFO: Pod "downwardapi-volume-38cc16dd-7364-40ca-ba24-a35f67596968": Phase="Pending", Reason="", readiness=false. Elapsed: 4.134050907s May 12 17:17:38.375: INFO: Pod "downwardapi-volume-38cc16dd-7364-40ca-ba24-a35f67596968": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.137831864s STEP: Saw pod success May 12 17:17:38.375: INFO: Pod "downwardapi-volume-38cc16dd-7364-40ca-ba24-a35f67596968" satisfied condition "success or failure" May 12 17:17:38.378: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-38cc16dd-7364-40ca-ba24-a35f67596968 container client-container: STEP: delete the pod May 12 17:17:38.482: INFO: Waiting for pod downwardapi-volume-38cc16dd-7364-40ca-ba24-a35f67596968 to disappear May 12 17:17:38.536: INFO: Pod downwardapi-volume-38cc16dd-7364-40ca-ba24-a35f67596968 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:17:38.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4505" for this suite. • [SLOW TEST:6.458 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":165,"skipped":2784,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:17:38.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 12 17:17:39.664: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 12 17:17:41.673: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724900659, 
loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724900659, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724900660, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724900659, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 17:17:44.081: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724900659, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724900659, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724900660, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724900659, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 17:17:45.679: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724900659, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724900659, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724900660, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724900659, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 12 17:17:48.756: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 12 17:17:48.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-8827-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:17:50.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8381" for this suite. 
STEP: Destroying namespace "webhook-8381-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:13.477 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":166,"skipped":2799,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:17:52.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-3b13d528-4f8d-408c-9688-8026919d237b STEP: Creating a pod to test consume configMaps May 12 17:17:52.720: INFO: Waiting up to 5m0s for pod "pod-configmaps-2d9c62ed-ecee-4a39-8971-8e88b7739ecd" in namespace "configmap-1546" to be "success or failure" May 12 17:17:52.886: INFO: Pod "pod-configmaps-2d9c62ed-ecee-4a39-8971-8e88b7739ecd": Phase="Pending", Reason="", readiness=false. Elapsed: 165.568948ms May 12 17:17:55.091: INFO: Pod "pod-configmaps-2d9c62ed-ecee-4a39-8971-8e88b7739ecd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.37086778s May 12 17:17:57.247: INFO: Pod "pod-configmaps-2d9c62ed-ecee-4a39-8971-8e88b7739ecd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.527125549s May 12 17:17:59.306: INFO: Pod "pod-configmaps-2d9c62ed-ecee-4a39-8971-8e88b7739ecd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.586053082s May 12 17:18:01.542: INFO: Pod "pod-configmaps-2d9c62ed-ecee-4a39-8971-8e88b7739ecd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.821399357s STEP: Saw pod success May 12 17:18:01.542: INFO: Pod "pod-configmaps-2d9c62ed-ecee-4a39-8971-8e88b7739ecd" satisfied condition "success or failure" May 12 17:18:01.544: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-2d9c62ed-ecee-4a39-8971-8e88b7739ecd container configmap-volume-test: STEP: delete the pod May 12 17:18:02.129: INFO: Waiting for pod pod-configmaps-2d9c62ed-ecee-4a39-8971-8e88b7739ecd to disappear May 12 17:18:02.218: INFO: Pod pod-configmaps-2d9c62ed-ecee-4a39-8971-8e88b7739ecd no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:18:02.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1546" for this suite. • [SLOW TEST:10.197 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":167,"skipped":2822,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:18:02.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 12 17:18:03.470: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 12 17:18:05.582: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724900683, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724900683, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724900684, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724900683, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is 
progressing."}}, CollisionCount:(*int32)(nil)} May 12 17:18:07.625: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724900683, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724900683, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724900684, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724900683, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 12 17:18:10.640: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:18:12.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6087" for this suite. STEP: Destroying namespace "webhook-6087-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.671 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":168,"skipped":2838,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:18:12.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-07cd5908-f756-4763-9fa6-bebb2b2933f7 STEP: Creating a pod to test consume secrets May 12 17:18:12.992: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a66148d2-d800-4a34-8fce-71935ae56e5c" in namespace "projected-7860" to be "success or failure" May 12 17:18:13.015: INFO: Pod "pod-projected-secrets-a66148d2-d800-4a34-8fce-71935ae56e5c": Phase="Pending", Reason="", readiness=false. Elapsed: 22.826608ms May 12 17:18:15.018: INFO: Pod "pod-projected-secrets-a66148d2-d800-4a34-8fce-71935ae56e5c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025982785s May 12 17:18:17.140: INFO: Pod "pod-projected-secrets-a66148d2-d800-4a34-8fce-71935ae56e5c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.147733455s May 12 17:18:19.403: INFO: Pod "pod-projected-secrets-a66148d2-d800-4a34-8fce-71935ae56e5c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.410587538s STEP: Saw pod success May 12 17:18:19.403: INFO: Pod "pod-projected-secrets-a66148d2-d800-4a34-8fce-71935ae56e5c" satisfied condition "success or failure" May 12 17:18:19.661: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-a66148d2-d800-4a34-8fce-71935ae56e5c container projected-secret-volume-test: STEP: delete the pod May 12 17:18:19.727: INFO: Waiting for pod pod-projected-secrets-a66148d2-d800-4a34-8fce-71935ae56e5c to disappear May 12 17:18:19.792: INFO: Pod pod-projected-secrets-a66148d2-d800-4a34-8fce-71935ae56e5c no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:18:19.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7860" for this suite. 
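The projected-secret test above creates a Secret, mounts it through a projected volume whose items entry remaps a key to a new relative path, and reads the container's output back to confirm the mapping. A minimal manifest sketch of that shape (the Secret name, key, and path are illustrative; the real test uses generated names like the projected-secret-test-map-... above):

```sh
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: projected-secret-demo          # illustrative name
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo     # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.29
    command: ["cat", "/etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-demo
          items:
          - key: data-1
            path: new-path-data-1      # the key is exposed under this remapped path
EOF
```

Once the pod reaches Succeeded, `kubectl logs pod-projected-secrets-demo` should print value-1, which is the same read-back the framework performs in its "Trying to get logs" step.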
• [SLOW TEST:6.904 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":169,"skipped":2840,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:18:19.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-5ce06a79-c91d-41ff-bbc2-00eea91e1e0e STEP: Creating a pod to test consume secrets May 12 17:18:19.887: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bf26a682-b782-4fd0-bd3c-445d2aa56eec" in namespace "projected-1351" to be "success or failure" May 12 17:18:19.890: INFO: Pod "pod-projected-secrets-bf26a682-b782-4fd0-bd3c-445d2aa56eec": Phase="Pending", Reason="", readiness=false. Elapsed: 3.590504ms May 12 17:18:21.894: INFO: Pod "pod-projected-secrets-bf26a682-b782-4fd0-bd3c-445d2aa56eec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007072781s May 12 17:18:24.002: INFO: Pod "pod-projected-secrets-bf26a682-b782-4fd0-bd3c-445d2aa56eec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.114982987s STEP: Saw pod success May 12 17:18:24.002: INFO: Pod "pod-projected-secrets-bf26a682-b782-4fd0-bd3c-445d2aa56eec" satisfied condition "success or failure" May 12 17:18:24.004: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-bf26a682-b782-4fd0-bd3c-445d2aa56eec container projected-secret-volume-test: STEP: delete the pod May 12 17:18:24.159: INFO: Waiting for pod pod-projected-secrets-bf26a682-b782-4fd0-bd3c-445d2aa56eec to disappear May 12 17:18:24.202: INFO: Pod pod-projected-secrets-bf26a682-b782-4fd0-bd3c-445d2aa56eec no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:18:24.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1351" for this suite. 
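The Item Mode variant above is the same mount with one addition: the mapped item pins an explicit POSIX mode that the container then asserts on. A sketch reusing the illustrative Secret from the previous example, with busybox's stat standing in for the framework's mount-test image:

```sh
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secret-mode-demo  # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox:1.29
    # -L follows the volume's internal symlink so the real file's mode is printed.
    command: ["sh", "-c", "stat -Lc '%a' /etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-demo   # Secret from the previous sketch
          items:
          - key: data-1
            path: new-path-data-1
            mode: 0400                  # the per-item mode under test
EOF
```

The pod's log should read 400; files without a per-item mode fall back to the volume's defaultMode.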
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":170,"skipped":2852,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:18:24.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 12 17:18:24.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' May 12 17:18:24.885: INFO: stderr: "" May 12 17:18:24.885: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.4\", GitCommit:\"8d8aa39598534325ad77120c120a22b3a990b5ea\", GitTreeState:\"clean\", BuildDate:\"2020-05-06T19:23:43Z\", GoVersion:\"go1.13.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.2\", GitCommit:\"59603c6e503c87169aea6106f57b9f242f64df89\", GitTreeState:\"clean\", BuildDate:\"2020-02-07T01:05:17Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:18:24.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3481" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":171,"skipped":2866,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:18:25.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 12 17:18:25.701: INFO: PodSpec: initContainers in spec.initContainers May 12 17:19:26.463: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-512eb4d8-ecce-4c7c-bf99-e82c55892b38", GenerateName:"", Namespace:"init-container-663", SelfLink:"/api/v1/namespaces/init-container-663/pods/pod-init-512eb4d8-ecce-4c7c-bf99-e82c55892b38", UID:"d418b34d-7f34-4c27-b65c-461041aca0f1", ResourceVersion:"15626050", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63724900705, loc:(*time.Location)(0x78ee0c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"701592360"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-nbf2h", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc006f8d280), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), 
CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nbf2h", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nbf2h", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-nbf2h", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc003b66778), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), 
SecurityContext:(*v1.PodSecurityContext)(0xc0046122a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003b66850)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003b66890)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc003b66898), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc003b6689c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724900706, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724900706, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724900706, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724900705, loc:(*time.Location)(0x78ee0c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.8", PodIP:"10.244.2.40", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.40"}}, StartTime:(*v1.Time)(0xc00234f220), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc00234f320), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002267f10)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://118b4afd1058aa754cd17ecc2ae0925723aef84944e05310eb14cc7164a2af98", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00234f380), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", 
Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00234f2c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc003b66adf)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:19:26.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-663" for this suite. • [SLOW TEST:61.504 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":172,"skipped":2872,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:19:26.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on node default medium May 12 17:19:26.963: INFO: Waiting up to 5m0s for pod "pod-962fd26e-29a7-43c8-a3c5-56f90b94ec0c" in namespace "emptydir-394" to be "success or failure" May 12 17:19:27.005: INFO: Pod "pod-962fd26e-29a7-43c8-a3c5-56f90b94ec0c": Phase="Pending", Reason="", readiness=false. Elapsed: 42.565962ms May 12 17:19:29.369: INFO: Pod "pod-962fd26e-29a7-43c8-a3c5-56f90b94ec0c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.406033203s May 12 17:19:31.373: INFO: Pod "pod-962fd26e-29a7-43c8-a3c5-56f90b94ec0c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.410058723s May 12 17:19:33.393: INFO: Pod "pod-962fd26e-29a7-43c8-a3c5-56f90b94ec0c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.429953881s May 12 17:19:35.670: INFO: Pod "pod-962fd26e-29a7-43c8-a3c5-56f90b94ec0c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.706948209s STEP: Saw pod success May 12 17:19:35.670: INFO: Pod "pod-962fd26e-29a7-43c8-a3c5-56f90b94ec0c" satisfied condition "success or failure" May 12 17:19:35.672: INFO: Trying to get logs from node jerma-worker2 pod pod-962fd26e-29a7-43c8-a3c5-56f90b94ec0c container test-container: STEP: delete the pod May 12 17:19:37.124: INFO: Waiting for pod pod-962fd26e-29a7-43c8-a3c5-56f90b94ec0c to disappear May 12 17:19:37.209: INFO: Pod pod-962fd26e-29a7-43c8-a3c5-56f90b94ec0c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:19:37.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-394" for this suite. • [SLOW TEST:10.986 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":173,"skipped":2872,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:19:37.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-583ba415-f13a-4290-a7be-b7f940cca1ac STEP: Creating a pod to test consume secrets May 12 17:19:40.762: INFO: Waiting up to 5m0s for pod "pod-secrets-422d1b37-226a-4408-8a66-d7d6588ce534" in namespace "secrets-801" to be "success or failure" May 12 17:19:41.045: INFO: Pod "pod-secrets-422d1b37-226a-4408-8a66-d7d6588ce534": Phase="Pending", Reason="", readiness=false. Elapsed: 283.242132ms May 12 17:19:43.134: INFO: Pod "pod-secrets-422d1b37-226a-4408-8a66-d7d6588ce534": Phase="Pending", Reason="", readiness=false. Elapsed: 2.372863403s May 12 17:19:45.387: INFO: Pod "pod-secrets-422d1b37-226a-4408-8a66-d7d6588ce534": Phase="Pending", Reason="", readiness=false. Elapsed: 4.625659947s May 12 17:19:47.437: INFO: Pod "pod-secrets-422d1b37-226a-4408-8a66-d7d6588ce534": Phase="Pending", Reason="", readiness=false. Elapsed: 6.675384025s May 12 17:19:49.722: INFO: Pod "pod-secrets-422d1b37-226a-4408-8a66-d7d6588ce534": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.960808078s May 12 17:19:52.579: INFO: Pod "pod-secrets-422d1b37-226a-4408-8a66-d7d6588ce534": Phase="Running", Reason="", readiness=true. Elapsed: 11.81718462s May 12 17:19:54.625: INFO: Pod "pod-secrets-422d1b37-226a-4408-8a66-d7d6588ce534": Phase="Succeeded", Reason="", readiness=false. Elapsed: 13.863724094s STEP: Saw pod success May 12 17:19:54.625: INFO: Pod "pod-secrets-422d1b37-226a-4408-8a66-d7d6588ce534" satisfied condition "success or failure" May 12 17:19:54.627: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-422d1b37-226a-4408-8a66-d7d6588ce534 container secret-volume-test: STEP: delete the pod May 12 17:19:55.890: INFO: Waiting for pod pod-secrets-422d1b37-226a-4408-8a66-d7d6588ce534 to disappear May 12 17:19:56.496: INFO: Pod pod-secrets-422d1b37-226a-4408-8a66-d7d6588ce534 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:19:56.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-801" for this suite. STEP: Destroying namespace "secret-namespace-7586" for this suite. • [SLOW TEST:19.523 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":174,"skipped":2878,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:19:57.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-7d6cedc3-709b-40cf-8a4c-d7dd1288d6f4 in namespace container-probe-579 May 12 17:20:06.282: INFO: Started pod busybox-7d6cedc3-709b-40cf-8a4c-d7dd1288d6f4 in namespace container-probe-579 STEP: checking the pod's current state and verifying that restartCount is present May 12 17:20:06.725: INFO: Initial restart count of pod busybox-7d6cedc3-709b-40cf-8a4c-d7dd1288d6f4 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:24:08.203: INFO: Waiting 
up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-579" for this suite. • [SLOW TEST:251.336 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":175,"skipped":2896,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:24:08.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 12 17:24:09.098: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:24:16.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5125" for this suite. 
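The CustomResourceDefinition test above registers definitions through the apiextensions client and lists them back. Against a v1.17 server like this one, the same round trip with kubectl looks roughly as follows (the group and names are hypothetical):

```sh
# Register a throwaway CRD; apiextensions.k8s.io/v1beta1 is still served on 1.17.
kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: e2e-demos.example.com          # must be <plural>.<group>
spec:
  group: example.com
  version: v1
  scope: Namespaced
  names:
    plural: e2e-demos
    singular: e2e-demo
    kind: E2eDemo
EOF

# The new definition should appear in the list, which is what the test verifies.
kubectl get customresourcedefinitions

# Clean up, mirroring the namespace teardown above.
kubectl delete crd e2e-demos.example.com
```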
• [SLOW TEST:8.228 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":176,"skipped":2916,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:24:16.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-9961.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-9961.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-9961.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-9961.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9961.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-9961.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-9961.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-9961.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-9961.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9961.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 12 17:24:25.110: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7: the server could not find the requested resource (get pods dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7) May 12 17:24:25.112: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7: the server could not find the requested resource (get pods dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7) May 12 17:24:25.114: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7: the server could not find the requested resource (get pods dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7) May 12 17:24:25.116: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7: the server could not find the requested resource (get pods dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7) May 12 17:24:25.123: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7: the server could not find the requested resource (get pods dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7) May 12 17:24:25.125: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7: the server could not find the requested resource (get pods dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7) May 12 17:24:25.127: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9961.svc.cluster.local from pod 
dns-9961/dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7: the server could not find the requested resource (get pods dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7) May 12 17:24:25.130: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7: the server could not find the requested resource (get pods dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7) May 12 17:24:25.174: INFO: Lookups using dns-9961/dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9961.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9961.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local jessie_udp@dns-test-service-2.dns-9961.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9961.svc.cluster.local] May 12 17:24:30.178: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7: the server could not find the requested resource (get pods dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7) May 12 17:24:30.180: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7: the server could not find the requested resource (get pods dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7) May 12 17:24:30.183: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7: the server could not find the requested resource (get pods dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7) May 12 17:24:30.185: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7: the server could not find the requested resource (get pods dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7) May 12 17:24:30.194: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7: the server could not find the requested resource (get pods dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7) May 12 17:24:30.197: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7: the server could not find the requested resource (get pods dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7) May 12 17:24:30.200: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7: the server could not find the requested resource (get pods dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7) May 12 17:24:30.202: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7: the server could not find the requested resource (get pods dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7) May 12 17:24:30.207: INFO: Lookups using dns-9961/dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local 
wheezy_udp@dns-test-service-2.dns-9961.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9961.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local jessie_udp@dns-test-service-2.dns-9961.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9961.svc.cluster.local] May 12 17:24:35.178: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7: the server could not find the requested resource (get pods dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7) May 12 17:24:35.181: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7: the server could not find the requested resource (get pods dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7) May 12 17:24:35.184: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7: the server could not find the requested resource (get pods dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7) May 12 17:24:35.187: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7: the server could not find the requested resource (get pods dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7) May 12 17:24:35.346: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7: the server could not find the requested resource (get pods dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7) May 12 17:24:35.349: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7: the server could not find the requested resource (get pods dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7) May 12 17:24:35.351: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7: the server could not find the requested resource (get pods dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7) May 12 17:24:35.353: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7: the server could not find the requested resource (get pods dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7) May 12 17:24:35.359: INFO: Lookups using dns-9961/dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9961.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9961.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local jessie_udp@dns-test-service-2.dns-9961.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9961.svc.cluster.local] May 12 17:24:40.179: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7: the server could not find the requested resource (get pods dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7) May 12 17:24:40.182: INFO: Unable to read 
wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7: the server could not find the requested resource (get pods dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7) May 12 17:24:40.186: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7: the server could not find the requested resource (get pods dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7) May 12 17:24:40.189: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7: the server could not find the requested resource (get pods dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7) May 12 17:24:40.281: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7: the server could not find the requested resource (get pods dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7) May 12 17:24:40.284: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7: the server could not find the requested resource (get pods dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7) May 12 17:24:40.287: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7: the server could not find the requested resource (get pods dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7) May 12 17:24:40.290: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7: the server could not find the requested resource (get pods dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7) May 12 17:24:40.299: INFO: Lookups using dns-9961/dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9961.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9961.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local jessie_udp@dns-test-service-2.dns-9961.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9961.svc.cluster.local] May 12 17:24:45.178: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7: the server could not find the requested resource (get pods dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7) May 12 17:24:45.180: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7: the server could not find the requested resource (get pods dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7) May 12 17:24:45.183: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7: the server could not find the requested resource (get pods dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7) May 12 17:24:45.187: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7: the server could not find the requested 
resource (get pods dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7) May 12 17:24:45.220: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7: the server could not find the requested resource (get pods dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7) May 12 17:24:45.222: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7: the server could not find the requested resource (get pods dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7) May 12 17:24:45.224: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7: the server could not find the requested resource (get pods dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7) May 12 17:24:45.226: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7: the server could not find the requested resource (get pods dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7) May 12 17:24:45.230: INFO: Lookups using dns-9961/dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9961.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9961.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local jessie_udp@dns-test-service-2.dns-9961.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9961.svc.cluster.local] May 12 17:24:50.177: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7: the server could not find the requested resource (get pods dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7) May 12 17:24:50.180: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7: the server could not find the requested resource (get pods dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7) May 12 17:24:50.182: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7: the server could not find the requested resource (get pods dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7) May 12 17:24:50.184: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7: the server could not find the requested resource (get pods dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7) May 12 17:24:50.191: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7: the server could not find the requested resource (get pods dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7) May 12 17:24:50.193: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7: the server could not find the requested resource (get pods dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7) May 12 17:24:50.195: INFO: Unable to read 
jessie_udp@dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7: the server could not find the requested resource (get pods dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7) May 12 17:24:50.198: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9961.svc.cluster.local from pod dns-9961/dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7: the server could not find the requested resource (get pods dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7) May 12 17:24:50.203: INFO: Lookups using dns-9961/dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9961.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9961.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9961.svc.cluster.local jessie_udp@dns-test-service-2.dns-9961.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9961.svc.cluster.local] May 12 17:24:55.215: INFO: DNS probes using dns-9961/dns-test-7e3e0217-2967-4127-ac86-1f93501adbb7 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:24:55.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9961" for this suite. • [SLOW TEST:39.098 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":177,"skipped":2922,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:24:55.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 12 17:25:02.827: INFO: Successfully updated pod "labelsupdatee7779862-b456-4a0c-87fd-6725aa614a98" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:25:05.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3989" for this suite. 
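The long run of "Unable to read ... the server could not find the requested resource (get pods dns-test-...)" entries above is expected while the subdomain probe converges: the prober fetches per-name result files from the probe pod, and the probe containers only write a result once the corresponding lookup succeeds, so each failed read means that name had not yet resolved on that poll (the polls here are 5 seconds apart). The run recovers at 17:24:55, when all eight names resolve. A rough manual equivalent, assuming a shell in a pod in this cluster with dig installed and using the service names generated by this run:

# UDP and TCP lookups for the headless service's subdomain record;
# a non-empty answer section means the record has propagated.
dig +notcp +noall +answer +search dns-test-service-2.dns-9961.svc.cluster.local A
dig +tcp +noall +answer +search dns-test-service-2.dns-9961.svc.cluster.local A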
• [SLOW TEST:9.490 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":178,"skipped":2934,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:25:05.203: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2330.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2330.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 12 17:25:19.861: INFO: DNS probes using dns-2330/dns-test-043a2795-dc39-410f-b362-6d6df63bfa18 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:25:19.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2330" for this suite. 
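The wheezy and jessie scripts in the cluster-DNS spec above loop up to 600 times, once per second, writing an OK marker under /results for each name that resolves over both UDP and TCP; the prober then reads those markers back the same way as in the subdomain test. One iteration of the probe, unrolled for a plain shell session (dns-2330 is the namespace generated by this run):

# cluster service lookup over UDP, then the pod's own A record
check="$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$check" && echo OK
podARec=$(hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-2330.pod.cluster.local"}')
check="$(dig +notcp +noall +answer +search ${podARec} A)" && test -n "$check" && echo OK

The doubled $$ in the logged commands is the Kubernetes container-command escape for a literal $, needed because $(...) would otherwise be treated as a variable reference when the script is embedded in the pod spec; an interactive shell uses a single $.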
• [SLOW TEST:14.791 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":179,"skipped":2959,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:25:19.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 12 17:25:25.939: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:25:26.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4322" for this suite. 
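The line "Expected: &{} to match Container's Termination Message: --" above is the point of this spec: with terminationMessagePolicy: FallbackToLogsOnError, container logs are copied into the termination message only when the container fails, so a pod that succeeds without writing /dev/termination-log reports an empty message. A minimal sketch of the same setup (the pod name and image below are illustrative, not the ones generated by the run):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: termmsg-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo some log output; exit 0"]
    terminationMessagePolicy: FallbackToLogsOnError
EOF
# prints nothing: the container succeeded, so the log fallback does not apply
kubectl get pod termmsg-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'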
• [SLOW TEST:6.165 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":180,"skipped":2971,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:25:26.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:26:16.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3370" for this suite. 
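This spec drives three containers through terminate-and-restart cycles; the rpa/rpof/rpn suffixes are, going by the e2e source, abbreviations for restartPolicy Always, OnFailure and Never, and for each container the test checks RestartCount, Phase, the Ready condition and the terminated State. The same status fields can be inspected for any pod with a jsonpath query along these lines (the pod name is a placeholder):

kubectl get pod <pod-name> -o jsonpath='{.status.phase} {.status.containerStatuses[0].restartCount} {.status.containerStatuses[0].ready}{"\n"}'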
• [SLOW TEST:50.077 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":181,"skipped":2985,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:26:16.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service multi-endpoint-test in namespace services-3037 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3037 to expose endpoints map[] May 12 17:26:16.866: INFO: Get endpoints failed (171.794893ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found May 12 17:26:17.931: INFO: successfully validated that service multi-endpoint-test in namespace services-3037 exposes endpoints map[] (1.237142227s elapsed) STEP: Creating pod pod1 in namespace services-3037 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3037 to expose endpoints map[pod1:[100]] May 12 17:26:23.257: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (5.315775496s elapsed, will retry) May 12 17:26:26.093: INFO: successfully validated that service multi-endpoint-test in namespace services-3037 exposes endpoints map[pod1:[100]] (8.151765332s elapsed) STEP: Creating pod pod2 in namespace services-3037 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3037 to expose endpoints map[pod1:[100] pod2:[101]] May 12 17:26:32.715: INFO: Unexpected endpoints: found map[09bfa143-a520-4072-905e-d41eab0dbb84:[100]], expected map[pod1:[100] pod2:[101]] (6.558348526s elapsed, will retry) May 12 17:26:35.239: INFO: successfully validated that service multi-endpoint-test in namespace services-3037 exposes endpoints map[pod1:[100] pod2:[101]] (9.081897655s elapsed) STEP: Deleting pod pod1 in namespace services-3037 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3037 to expose endpoints map[pod2:[101]] May 12 17:26:36.746: INFO: successfully validated that service multi-endpoint-test in namespace services-3037 exposes endpoints 
map[pod2:[101]] (1.502216453s elapsed) STEP: Deleting pod pod2 in namespace services-3037 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-3037 to expose endpoints map[] May 12 17:26:38.461: INFO: successfully validated that service multi-endpoint-test in namespace services-3037 exposes endpoints map[] (1.711039732s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:26:39.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3037" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:23.035 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":182,"skipped":2987,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:26:39.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-0111f308-6ed8-40fd-aa93-fa87c1a1a05f STEP: Creating a pod to test consume configMaps May 12 17:26:40.357: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cc67eec6-16f1-48c2-bde7-6c4554a05e12" in namespace "projected-3751" to be "success or failure" May 12 17:26:40.639: INFO: Pod "pod-projected-configmaps-cc67eec6-16f1-48c2-bde7-6c4554a05e12": Phase="Pending", Reason="", readiness=false. Elapsed: 281.616601ms May 12 17:26:43.049: INFO: Pod "pod-projected-configmaps-cc67eec6-16f1-48c2-bde7-6c4554a05e12": Phase="Pending", Reason="", readiness=false. Elapsed: 2.692165827s May 12 17:26:45.068: INFO: Pod "pod-projected-configmaps-cc67eec6-16f1-48c2-bde7-6c4554a05e12": Phase="Pending", Reason="", readiness=false. Elapsed: 4.711003021s May 12 17:26:47.078: INFO: Pod "pod-projected-configmaps-cc67eec6-16f1-48c2-bde7-6c4554a05e12": Phase="Running", Reason="", readiness=true. Elapsed: 6.720971288s May 12 17:26:49.082: INFO: Pod "pod-projected-configmaps-cc67eec6-16f1-48c2-bde7-6c4554a05e12": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.725230288s STEP: Saw pod success May 12 17:26:49.082: INFO: Pod "pod-projected-configmaps-cc67eec6-16f1-48c2-bde7-6c4554a05e12" satisfied condition "success or failure" May 12 17:26:49.086: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-cc67eec6-16f1-48c2-bde7-6c4554a05e12 container projected-configmap-volume-test: STEP: delete the pod May 12 17:26:49.142: INFO: Waiting for pod pod-projected-configmaps-cc67eec6-16f1-48c2-bde7-6c4554a05e12 to disappear May 12 17:26:49.200: INFO: Pod pod-projected-configmaps-cc67eec6-16f1-48c2-bde7-6c4554a05e12 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:26:49.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3751" for this suite. • [SLOW TEST:10.028 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":183,"skipped":3025,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:26:49.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service nodeport-service with the type=NodePort in namespace services-8280 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-8280 STEP: creating replication controller externalsvc in namespace services-8280 I0512 17:26:49.740294 7 runners.go:189] Created replication controller with name: externalsvc, namespace: services-8280, replica count: 2 I0512 17:26:52.790618 7 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 17:26:55.790823 7 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 17:26:58.791086 7 runners.go:189] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 17:27:01.791293 7 runners.go:189] externalsvc Pods: 2 out of 
2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName May 12 17:27:02.578: INFO: Creating new exec pod May 12 17:27:11.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-8280 execpodzhh9d -- /bin/sh -x -c nslookup nodeport-service' May 12 17:27:11.717: INFO: stderr: "I0512 17:27:11.620504 3416 log.go:172] (0xc000a46580) (0xc00072bb80) Create stream\nI0512 17:27:11.620561 3416 log.go:172] (0xc000a46580) (0xc00072bb80) Stream added, broadcasting: 1\nI0512 17:27:11.622969 3416 log.go:172] (0xc000a46580) Reply frame received for 1\nI0512 17:27:11.623010 3416 log.go:172] (0xc000a46580) (0xc000992000) Create stream\nI0512 17:27:11.623020 3416 log.go:172] (0xc000a46580) (0xc000992000) Stream added, broadcasting: 3\nI0512 17:27:11.623932 3416 log.go:172] (0xc000a46580) Reply frame received for 3\nI0512 17:27:11.623957 3416 log.go:172] (0xc000a46580) (0xc0009cc000) Create stream\nI0512 17:27:11.623964 3416 log.go:172] (0xc000a46580) (0xc0009cc000) Stream added, broadcasting: 5\nI0512 17:27:11.624873 3416 log.go:172] (0xc000a46580) Reply frame received for 5\nI0512 17:27:11.700350 3416 log.go:172] (0xc000a46580) Data frame received for 5\nI0512 17:27:11.700378 3416 log.go:172] (0xc0009cc000) (5) Data frame handling\nI0512 17:27:11.700399 3416 log.go:172] (0xc0009cc000) (5) Data frame sent\n+ nslookup nodeport-service\nI0512 17:27:11.709006 3416 log.go:172] (0xc000a46580) Data frame received for 3\nI0512 17:27:11.709046 3416 log.go:172] (0xc000992000) (3) Data frame handling\nI0512 17:27:11.709080 3416 log.go:172] (0xc000992000) (3) Data frame sent\nI0512 17:27:11.709770 3416 log.go:172] (0xc000a46580) Data frame received for 3\nI0512 17:27:11.709792 3416 log.go:172] (0xc000992000) (3) Data frame handling\nI0512 17:27:11.709817 3416 log.go:172] (0xc000992000) (3) Data frame sent\nI0512 17:27:11.710564 3416 log.go:172] (0xc000a46580) Data frame received for 3\nI0512 17:27:11.710589 3416 log.go:172] (0xc000992000) (3) Data frame handling\nI0512 17:27:11.710610 3416 log.go:172] (0xc000a46580) Data frame received for 5\nI0512 17:27:11.710618 3416 log.go:172] (0xc0009cc000) (5) Data frame handling\nI0512 17:27:11.712040 3416 log.go:172] (0xc000a46580) Data frame received for 1\nI0512 17:27:11.712057 3416 log.go:172] (0xc00072bb80) (1) Data frame handling\nI0512 17:27:11.712066 3416 log.go:172] (0xc00072bb80) (1) Data frame sent\nI0512 17:27:11.712075 3416 log.go:172] (0xc000a46580) (0xc00072bb80) Stream removed, broadcasting: 1\nI0512 17:27:11.712137 3416 log.go:172] (0xc000a46580) Go away received\nI0512 17:27:11.712394 3416 log.go:172] (0xc000a46580) (0xc00072bb80) Stream removed, broadcasting: 1\nI0512 17:27:11.712451 3416 log.go:172] (0xc000a46580) (0xc000992000) Stream removed, broadcasting: 3\nI0512 17:27:11.712463 3416 log.go:172] (0xc000a46580) (0xc0009cc000) Stream removed, broadcasting: 5\n" May 12 17:27:11.717: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-8280.svc.cluster.local\tcanonical name = externalsvc.services-8280.svc.cluster.local.\nName:\texternalsvc.services-8280.svc.cluster.local\nAddress: 10.102.249.131\n\n" STEP: deleting ReplicationController externalsvc in namespace services-8280, will wait for the garbage collector to delete the pods May 12 17:27:11.852: INFO: Deleting ReplicationController externalsvc took: 7.192116ms May 12 17:27:12.452: INFO: Terminating ReplicationController 
externalsvc pods took: 600.246871ms May 12 17:27:19.740: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:27:19.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8280" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:30.504 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":184,"skipped":3067,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:27:19.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:27:20.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-2733" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":185,"skipped":3069,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} S ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:27:20.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's command May 12 17:27:20.710: INFO: Waiting up to 5m0s for pod "var-expansion-d8331bac-800d-493a-a1ba-b6e8413ff6c8" in namespace "var-expansion-4005" to be "success or failure" May 12 17:27:20.714: INFO: Pod "var-expansion-d8331bac-800d-493a-a1ba-b6e8413ff6c8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.291295ms May 12 17:27:22.718: INFO: Pod "var-expansion-d8331bac-800d-493a-a1ba-b6e8413ff6c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008170492s May 12 17:27:24.728: INFO: Pod "var-expansion-d8331bac-800d-493a-a1ba-b6e8413ff6c8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017858835s May 12 17:27:26.731: INFO: Pod "var-expansion-d8331bac-800d-493a-a1ba-b6e8413ff6c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.021615962s STEP: Saw pod success May 12 17:27:26.731: INFO: Pod "var-expansion-d8331bac-800d-493a-a1ba-b6e8413ff6c8" satisfied condition "success or failure" May 12 17:27:26.734: INFO: Trying to get logs from node jerma-worker pod var-expansion-d8331bac-800d-493a-a1ba-b6e8413ff6c8 container dapi-container: STEP: delete the pod May 12 17:27:27.041: INFO: Waiting for pod var-expansion-d8331bac-800d-493a-a1ba-b6e8413ff6c8 to disappear May 12 17:27:27.082: INFO: Pod var-expansion-d8331bac-800d-493a-a1ba-b6e8413ff6c8 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:27:27.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4005" for this suite. • [SLOW TEST:6.635 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":186,"skipped":3070,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:27:27.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 12 17:27:27.181: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c8000511-b06c-41ab-964b-37b4eb83d362" in namespace "projected-1588" to be "success or failure" May 12 17:27:27.191: INFO: Pod "downwardapi-volume-c8000511-b06c-41ab-964b-37b4eb83d362": Phase="Pending", Reason="", readiness=false. Elapsed: 9.75733ms May 12 17:27:29.195: INFO: Pod "downwardapi-volume-c8000511-b06c-41ab-964b-37b4eb83d362": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.013429744s May 12 17:27:31.199: INFO: Pod "downwardapi-volume-c8000511-b06c-41ab-964b-37b4eb83d362": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017655782s STEP: Saw pod success May 12 17:27:31.199: INFO: Pod "downwardapi-volume-c8000511-b06c-41ab-964b-37b4eb83d362" satisfied condition "success or failure" May 12 17:27:31.201: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-c8000511-b06c-41ab-964b-37b4eb83d362 container client-container: STEP: delete the pod May 12 17:27:31.298: INFO: Waiting for pod downwardapi-volume-c8000511-b06c-41ab-964b-37b4eb83d362 to disappear May 12 17:27:31.327: INFO: Pod downwardapi-volume-c8000511-b06c-41ab-964b-37b4eb83d362 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:27:31.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1588" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":187,"skipped":3095,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:27:31.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 12 17:27:31.680: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 12 17:27:36.717: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 12 17:27:36.717: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 12 17:27:38.720: INFO: Creating deployment "test-rollover-deployment" May 12 17:27:38.765: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 12 17:27:40.772: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 12 17:27:40.778: INFO: Ensure that both replica sets have 1 created replica May 12 17:27:40.784: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 12 17:27:40.789: INFO: Updating deployment test-rollover-deployment May 12 17:27:40.789: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 12 17:27:43.027: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 12 17:27:43.032: INFO: Make sure deployment "test-rollover-deployment" is complete May 12 17:27:43.037: INFO: all replica sets need to contain the pod-template-hash label May 12 17:27:43.037: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724901258, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724901258, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724901261, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724901258, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 17:27:45.044: INFO: all replica sets need to contain the pod-template-hash label May 12 17:27:45.044: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724901258, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724901258, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724901261, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724901258, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 17:27:47.046: INFO: all replica sets need to contain the pod-template-hash label May 12 17:27:47.046: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724901258, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724901258, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724901265, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724901258, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 17:27:49.044: INFO: all replica sets need to contain the pod-template-hash label May 12 17:27:49.044: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724901258, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724901258, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724901265, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724901258, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 17:27:51.044: INFO: all replica sets need to contain the pod-template-hash label May 12 17:27:51.044: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724901258, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724901258, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724901265, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724901258, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 17:27:53.045: INFO: all replica sets need to contain the pod-template-hash label May 12 17:27:53.045: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724901258, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724901258, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724901265, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724901258, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 17:27:55.043: INFO: all replica sets need to contain the pod-template-hash label May 12 17:27:55.043: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724901258, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724901258, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724901265, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724901258, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 
17:27:57.043: INFO: May 12 17:27:57.043: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 12 17:27:57.049: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-4496 /apis/apps/v1/namespaces/deployment-4496/deployments/test-rollover-deployment 6c852f31-05e8-4d92-b221-7808e36a147d 15628100 2 2020-05-12 17:27:38 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0011090e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-12 17:27:38 +0000 UTC,LastTransitionTime:2020-05-12 17:27:38 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-05-12 17:27:56 +0000 UTC,LastTransitionTime:2020-05-12 17:27:38 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 12 17:27:57.051: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-4496 /apis/apps/v1/namespaces/deployment-4496/replicasets/test-rollover-deployment-574d6dfbff 50ca3556-0b0f-4b09-a150-5891c8aa91c7 15628089 2 2020-05-12 17:27:40 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 6c852f31-05e8-4d92-b221-7808e36a147d 0xc001109ab7 0xc001109ab8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001109db8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 12 17:27:57.051: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 12 17:27:57.051: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-4496 /apis/apps/v1/namespaces/deployment-4496/replicasets/test-rollover-controller f338b576-3892-4b46-9354-76c5ad69ab8d 15628098 2 2020-05-12 17:27:31 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 6c852f31-05e8-4d92-b221-7808e36a147d 0xc0011099d7 0xc0011099d8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc001109a38 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 12 17:27:57.051: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-4496 /apis/apps/v1/namespaces/deployment-4496/replicasets/test-rollover-deployment-f6c94f66c c6d85fd4-a56c-4329-b086-a18dc568a064 15628040 2 2020-05-12 17:27:38 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 6c852f31-05e8-4d92-b221-7808e36a147d 0xc001109f70 0xc001109f71}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001109fe8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 12 17:27:57.053: INFO: Pod "test-rollover-deployment-574d6dfbff-sbxrr" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-sbxrr test-rollover-deployment-574d6dfbff- deployment-4496 /api/v1/namespaces/deployment-4496/pods/test-rollover-deployment-574d6dfbff-sbxrr 753d6a99-3582-44b3-bad3-1b337acf086c 15628057 0 2020-05-12 17:27:41 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 50ca3556-0b0f-4b09-a150-5891c8aa91c7 0xc0036d5107 0xc0036d5108}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ks82m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ks82m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ks82m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]Topol
ogySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:27:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:27:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:27:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:27:41 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.52,StartTime:2020-05-12 17:27:41 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-12 17:27:44 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://cb929996ae78cda1aebf71222b32af38db128502888b35c460fc33e3e8f74a57,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.52,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:27:57.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4496" for this suite. 
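The dumps above show the rollover completing: the Progressing condition moves from ReplicaSetUpdated to NewReplicaSetAvailable, the original test-rollover-controller ReplicaSet and the intermediate test-rollover-deployment-f6c94f66c revision both finish at Replicas:0, and only test-rollover-deployment-574d6dfbff keeps its replica. The repeated UnavailableReplicas:1 polls follow from MinReadySeconds:10 in the spec: the new pod only counts as available after being Ready for ten seconds. A simplified way to drive the same rollover by hand, using the names from this run (the test itself swaps the entire pod template, but any image update triggers the same ReplicaSet turnover):

kubectl -n deployment-4496 set image deployment/test-rollover-deployment agnhost=gcr.io/kubernetes-e2e-test-images/agnhost:2.8
kubectl -n deployment-4496 rollout status deployment/test-rollover-deployment
# after the rollout completes, every old ReplicaSet should report 0 replicas
kubectl -n deployment-4496 get rs -l name=rollover-pod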
• [SLOW TEST:25.722 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":188,"skipped":3097,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:27:57.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-5329 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-5329 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5329 May 12 17:27:57.353: INFO: Found 0 stateful pods, waiting for 1 May 12 17:28:07.357: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 12 17:28:07.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5329 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 12 17:28:12.424: INFO: stderr: "I0512 17:28:12.190039 3438 log.go:172] (0xc000105080) (0xc00063fc20) Create stream\nI0512 17:28:12.190114 3438 log.go:172] (0xc000105080) (0xc00063fc20) Stream added, broadcasting: 1\nI0512 17:28:12.192384 3438 log.go:172] (0xc000105080) Reply frame received for 1\nI0512 17:28:12.192411 3438 log.go:172] (0xc000105080) (0xc000754000) Create stream\nI0512 17:28:12.192418 3438 log.go:172] (0xc000105080) (0xc000754000) Stream added, broadcasting: 3\nI0512 17:28:12.193296 3438 log.go:172] (0xc000105080) Reply frame received for 3\nI0512 17:28:12.193334 3438 log.go:172] (0xc000105080) (0xc0007540a0) Create stream\nI0512 17:28:12.193347 3438 log.go:172] (0xc000105080) (0xc0007540a0) Stream added, broadcasting: 5\nI0512 17:28:12.194262 3438 log.go:172] (0xc000105080) Reply frame received for 5\nI0512 17:28:12.301591 3438 log.go:172] (0xc000105080) Data frame received for 5\nI0512 17:28:12.301617 3438 log.go:172] (0xc0007540a0) (5) Data frame handling\nI0512 17:28:12.301637 3438 log.go:172] (0xc0007540a0) 
(5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0512 17:28:12.416532 3438 log.go:172] (0xc000105080) Data frame received for 3\nI0512 17:28:12.416586 3438 log.go:172] (0xc000754000) (3) Data frame handling\nI0512 17:28:12.416627 3438 log.go:172] (0xc000754000) (3) Data frame sent\nI0512 17:28:12.416650 3438 log.go:172] (0xc000105080) Data frame received for 3\nI0512 17:28:12.416682 3438 log.go:172] (0xc000754000) (3) Data frame handling\nI0512 17:28:12.416883 3438 log.go:172] (0xc000105080) Data frame received for 5\nI0512 17:28:12.416937 3438 log.go:172] (0xc0007540a0) (5) Data frame handling\nI0512 17:28:12.418491 3438 log.go:172] (0xc000105080) Data frame received for 1\nI0512 17:28:12.418519 3438 log.go:172] (0xc00063fc20) (1) Data frame handling\nI0512 17:28:12.418531 3438 log.go:172] (0xc00063fc20) (1) Data frame sent\nI0512 17:28:12.418543 3438 log.go:172] (0xc000105080) (0xc00063fc20) Stream removed, broadcasting: 1\nI0512 17:28:12.418556 3438 log.go:172] (0xc000105080) Go away received\nI0512 17:28:12.419032 3438 log.go:172] (0xc000105080) (0xc00063fc20) Stream removed, broadcasting: 1\nI0512 17:28:12.419058 3438 log.go:172] (0xc000105080) (0xc000754000) Stream removed, broadcasting: 3\nI0512 17:28:12.419093 3438 log.go:172] (0xc000105080) (0xc0007540a0) Stream removed, broadcasting: 5\n" May 12 17:28:12.424: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 12 17:28:12.424: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 12 17:28:12.427: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 12 17:28:22.431: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 12 17:28:22.431: INFO: Waiting for statefulset status.replicas updated to 0 May 12 17:28:22.483: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999614s May 12 17:28:23.487: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.961324928s May 12 17:28:24.489: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.957372223s May 12 17:28:25.520: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.954777158s May 12 17:28:26.523: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.924406077s May 12 17:28:27.526: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.920977453s May 12 17:28:28.530: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.917809463s May 12 17:28:29.742: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.913707036s May 12 17:28:30.745: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.70225178s May 12 17:28:31.749: INFO: Verifying statefulset ss doesn't scale past 1 for another 698.646213ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5329 May 12 17:28:32.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5329 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 17:28:32.984: INFO: stderr: "I0512 17:28:32.912812 3471 log.go:172] (0xc0002164d0) (0xc0006de640) Create stream\nI0512 17:28:32.912911 3471 log.go:172] (0xc0002164d0) (0xc0006de640) Stream added, broadcasting: 1\nI0512 17:28:32.915826 3471 log.go:172] (0xc0002164d0) Reply frame received for 1\nI0512 17:28:32.915871 3471 
log.go:172] (0xc0002164d0) (0xc0006de6e0) Create stream\nI0512 17:28:32.915882 3471 log.go:172] (0xc0002164d0) (0xc0006de6e0) Stream added, broadcasting: 3\nI0512 17:28:32.916799 3471 log.go:172] (0xc0002164d0) Reply frame received for 3\nI0512 17:28:32.916829 3471 log.go:172] (0xc0002164d0) (0xc00094aa00) Create stream\nI0512 17:28:32.916836 3471 log.go:172] (0xc0002164d0) (0xc00094aa00) Stream added, broadcasting: 5\nI0512 17:28:32.917958 3471 log.go:172] (0xc0002164d0) Reply frame received for 5\nI0512 17:28:32.978954 3471 log.go:172] (0xc0002164d0) Data frame received for 3\nI0512 17:28:32.979007 3471 log.go:172] (0xc0002164d0) Data frame received for 5\nI0512 17:28:32.979036 3471 log.go:172] (0xc00094aa00) (5) Data frame handling\nI0512 17:28:32.979061 3471 log.go:172] (0xc00094aa00) (5) Data frame sent\nI0512 17:28:32.979070 3471 log.go:172] (0xc0002164d0) Data frame received for 5\nI0512 17:28:32.979078 3471 log.go:172] (0xc00094aa00) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0512 17:28:32.979099 3471 log.go:172] (0xc0006de6e0) (3) Data frame handling\nI0512 17:28:32.979111 3471 log.go:172] (0xc0006de6e0) (3) Data frame sent\nI0512 17:28:32.979121 3471 log.go:172] (0xc0002164d0) Data frame received for 3\nI0512 17:28:32.979132 3471 log.go:172] (0xc0006de6e0) (3) Data frame handling\nI0512 17:28:32.980422 3471 log.go:172] (0xc0002164d0) Data frame received for 1\nI0512 17:28:32.980441 3471 log.go:172] (0xc0006de640) (1) Data frame handling\nI0512 17:28:32.980449 3471 log.go:172] (0xc0006de640) (1) Data frame sent\nI0512 17:28:32.980611 3471 log.go:172] (0xc0002164d0) (0xc0006de640) Stream removed, broadcasting: 1\nI0512 17:28:32.980754 3471 log.go:172] (0xc0002164d0) Go away received\nI0512 17:28:32.980851 3471 log.go:172] (0xc0002164d0) (0xc0006de640) Stream removed, broadcasting: 1\nI0512 17:28:32.980868 3471 log.go:172] (0xc0002164d0) (0xc0006de6e0) Stream removed, broadcasting: 3\nI0512 17:28:32.980881 3471 log.go:172] (0xc0002164d0) (0xc00094aa00) Stream removed, broadcasting: 5\n" May 12 17:28:32.984: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 12 17:28:32.984: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 12 17:28:32.987: INFO: Found 1 stateful pods, waiting for 3 May 12 17:28:42.991: INFO: Found 2 stateful pods, waiting for 3 May 12 17:28:53.006: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 12 17:28:53.006: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 12 17:28:53.006: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 12 17:28:53.013: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5329 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 12 17:28:53.222: INFO: stderr: "I0512 17:28:53.133936 3491 log.go:172] (0xc0000f62c0) (0xc0008c2000) Create stream\nI0512 17:28:53.133973 3491 log.go:172] (0xc0000f62c0) (0xc0008c2000) Stream added, broadcasting: 1\nI0512 17:28:53.136216 3491 log.go:172] (0xc0000f62c0) Reply frame received for 1\nI0512 17:28:53.136254 3491 log.go:172] (0xc0000f62c0) (0xc0008c20a0) Create stream\nI0512 17:28:53.136269 3491 log.go:172] (0xc0000f62c0) 
(0xc0008c20a0) Stream added, broadcasting: 3\nI0512 17:28:53.136929 3491 log.go:172] (0xc0000f62c0) Reply frame received for 3\nI0512 17:28:53.136966 3491 log.go:172] (0xc0000f62c0) (0xc0008c2140) Create stream\nI0512 17:28:53.136976 3491 log.go:172] (0xc0000f62c0) (0xc0008c2140) Stream added, broadcasting: 5\nI0512 17:28:53.137882 3491 log.go:172] (0xc0000f62c0) Reply frame received for 5\nI0512 17:28:53.208492 3491 log.go:172] (0xc0000f62c0) Data frame received for 5\nI0512 17:28:53.208506 3491 log.go:172] (0xc0008c2140) (5) Data frame handling\nI0512 17:28:53.208515 3491 log.go:172] (0xc0008c2140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0512 17:28:53.217279 3491 log.go:172] (0xc0000f62c0) Data frame received for 3\nI0512 17:28:53.217480 3491 log.go:172] (0xc0008c20a0) (3) Data frame handling\nI0512 17:28:53.217502 3491 log.go:172] (0xc0008c20a0) (3) Data frame sent\nI0512 17:28:53.217512 3491 log.go:172] (0xc0000f62c0) Data frame received for 3\nI0512 17:28:53.217524 3491 log.go:172] (0xc0000f62c0) Data frame received for 5\nI0512 17:28:53.217540 3491 log.go:172] (0xc0008c2140) (5) Data frame handling\nI0512 17:28:53.217553 3491 log.go:172] (0xc0008c20a0) (3) Data frame handling\nI0512 17:28:53.218754 3491 log.go:172] (0xc0000f62c0) Data frame received for 1\nI0512 17:28:53.218766 3491 log.go:172] (0xc0008c2000) (1) Data frame handling\nI0512 17:28:53.218772 3491 log.go:172] (0xc0008c2000) (1) Data frame sent\nI0512 17:28:53.218780 3491 log.go:172] (0xc0000f62c0) (0xc0008c2000) Stream removed, broadcasting: 1\nI0512 17:28:53.218859 3491 log.go:172] (0xc0000f62c0) Go away received\nI0512 17:28:53.218999 3491 log.go:172] (0xc0000f62c0) (0xc0008c2000) Stream removed, broadcasting: 1\nI0512 17:28:53.219014 3491 log.go:172] (0xc0000f62c0) (0xc0008c20a0) Stream removed, broadcasting: 3\nI0512 17:28:53.219024 3491 log.go:172] (0xc0000f62c0) (0xc0008c2140) Stream removed, broadcasting: 5\n" May 12 17:28:53.223: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 12 17:28:53.223: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 12 17:28:53.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5329 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 12 17:28:53.707: INFO: stderr: "I0512 17:28:53.559643 3512 log.go:172] (0xc000ae74a0) (0xc000b80820) Create stream\nI0512 17:28:53.559689 3512 log.go:172] (0xc000ae74a0) (0xc000b80820) Stream added, broadcasting: 1\nI0512 17:28:53.563186 3512 log.go:172] (0xc000ae74a0) Reply frame received for 1\nI0512 17:28:53.563218 3512 log.go:172] (0xc000ae74a0) (0xc00063e640) Create stream\nI0512 17:28:53.563225 3512 log.go:172] (0xc000ae74a0) (0xc00063e640) Stream added, broadcasting: 3\nI0512 17:28:53.563938 3512 log.go:172] (0xc000ae74a0) Reply frame received for 3\nI0512 17:28:53.563983 3512 log.go:172] (0xc000ae74a0) (0xc000401400) Create stream\nI0512 17:28:53.564002 3512 log.go:172] (0xc000ae74a0) (0xc000401400) Stream added, broadcasting: 5\nI0512 17:28:53.564694 3512 log.go:172] (0xc000ae74a0) Reply frame received for 5\nI0512 17:28:53.618676 3512 log.go:172] (0xc000ae74a0) Data frame received for 5\nI0512 17:28:53.618695 3512 log.go:172] (0xc000401400) (5) Data frame handling\nI0512 17:28:53.618704 3512 log.go:172] (0xc000401400) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0512 
17:28:53.701829 3512 log.go:172] (0xc000ae74a0) Data frame received for 5\nI0512 17:28:53.701857 3512 log.go:172] (0xc000401400) (5) Data frame handling\nI0512 17:28:53.701871 3512 log.go:172] (0xc000ae74a0) Data frame received for 3\nI0512 17:28:53.701876 3512 log.go:172] (0xc00063e640) (3) Data frame handling\nI0512 17:28:53.701882 3512 log.go:172] (0xc00063e640) (3) Data frame sent\nI0512 17:28:53.701886 3512 log.go:172] (0xc000ae74a0) Data frame received for 3\nI0512 17:28:53.701891 3512 log.go:172] (0xc00063e640) (3) Data frame handling\nI0512 17:28:53.703036 3512 log.go:172] (0xc000ae74a0) Data frame received for 1\nI0512 17:28:53.703058 3512 log.go:172] (0xc000b80820) (1) Data frame handling\nI0512 17:28:53.703067 3512 log.go:172] (0xc000b80820) (1) Data frame sent\nI0512 17:28:53.703076 3512 log.go:172] (0xc000ae74a0) (0xc000b80820) Stream removed, broadcasting: 1\nI0512 17:28:53.703087 3512 log.go:172] (0xc000ae74a0) Go away received\nI0512 17:28:53.703471 3512 log.go:172] (0xc000ae74a0) (0xc000b80820) Stream removed, broadcasting: 1\nI0512 17:28:53.703493 3512 log.go:172] (0xc000ae74a0) (0xc00063e640) Stream removed, broadcasting: 3\nI0512 17:28:53.703508 3512 log.go:172] (0xc000ae74a0) (0xc000401400) Stream removed, broadcasting: 5\n" May 12 17:28:53.708: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 12 17:28:53.708: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 12 17:28:53.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5329 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 12 17:28:54.033: INFO: stderr: "I0512 17:28:53.837732 3532 log.go:172] (0xc0000f42c0) (0xc0008d8000) Create stream\nI0512 17:28:53.837845 3532 log.go:172] (0xc0000f42c0) (0xc0008d8000) Stream added, broadcasting: 1\nI0512 17:28:53.841894 3532 log.go:172] (0xc0000f42c0) Reply frame received for 1\nI0512 17:28:53.841954 3532 log.go:172] (0xc0000f42c0) (0xc0006e7900) Create stream\nI0512 17:28:53.841979 3532 log.go:172] (0xc0000f42c0) (0xc0006e7900) Stream added, broadcasting: 3\nI0512 17:28:53.842956 3532 log.go:172] (0xc0000f42c0) Reply frame received for 3\nI0512 17:28:53.842995 3532 log.go:172] (0xc0000f42c0) (0xc000028000) Create stream\nI0512 17:28:53.843010 3532 log.go:172] (0xc0000f42c0) (0xc000028000) Stream added, broadcasting: 5\nI0512 17:28:53.844160 3532 log.go:172] (0xc0000f42c0) Reply frame received for 5\nI0512 17:28:53.910441 3532 log.go:172] (0xc0000f42c0) Data frame received for 5\nI0512 17:28:53.910467 3532 log.go:172] (0xc000028000) (5) Data frame handling\nI0512 17:28:53.910482 3532 log.go:172] (0xc000028000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0512 17:28:54.026006 3532 log.go:172] (0xc0000f42c0) Data frame received for 5\nI0512 17:28:54.026044 3532 log.go:172] (0xc000028000) (5) Data frame handling\nI0512 17:28:54.026072 3532 log.go:172] (0xc0000f42c0) Data frame received for 3\nI0512 17:28:54.026094 3532 log.go:172] (0xc0006e7900) (3) Data frame handling\nI0512 17:28:54.026107 3532 log.go:172] (0xc0006e7900) (3) Data frame sent\nI0512 17:28:54.026412 3532 log.go:172] (0xc0000f42c0) Data frame received for 3\nI0512 17:28:54.026436 3532 log.go:172] (0xc0006e7900) (3) Data frame handling\nI0512 17:28:54.028407 3532 log.go:172] (0xc0000f42c0) Data frame received for 1\nI0512 17:28:54.028423 3532 log.go:172] (0xc0008d8000) (1) Data 
frame handling\nI0512 17:28:54.028431 3532 log.go:172] (0xc0008d8000) (1) Data frame sent\nI0512 17:28:54.028706 3532 log.go:172] (0xc0000f42c0) (0xc0008d8000) Stream removed, broadcasting: 1\nI0512 17:28:54.028813 3532 log.go:172] (0xc0000f42c0) Go away received\nI0512 17:28:54.028953 3532 log.go:172] (0xc0000f42c0) (0xc0008d8000) Stream removed, broadcasting: 1\nI0512 17:28:54.028969 3532 log.go:172] (0xc0000f42c0) (0xc0006e7900) Stream removed, broadcasting: 3\nI0512 17:28:54.028979 3532 log.go:172] (0xc0000f42c0) (0xc000028000) Stream removed, broadcasting: 5\n" May 12 17:28:54.034: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 12 17:28:54.034: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 12 17:28:54.034: INFO: Waiting for statefulset status.replicas updated to 0 May 12 17:28:54.037: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 May 12 17:29:04.902: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 12 17:29:04.902: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 12 17:29:04.902: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 12 17:29:05.164: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999704s May 12 17:29:06.396: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.903368972s May 12 17:29:07.473: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.671683744s May 12 17:29:08.509: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.594800081s May 12 17:29:09.779: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.558251413s May 12 17:29:11.066: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.288733113s May 12 17:29:12.071: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.001100464s May 12 17:29:13.252: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.996782917s May 12 17:29:14.259: INFO: Verifying statefulset ss doesn't scale past 3 for another 815.371648ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-5329 May 12 17:29:15.899: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5329 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 17:29:16.700: INFO: stderr: "I0512 17:29:16.609478 3552 log.go:172] (0xc000a66c60) (0xc0007083c0) Create stream\nI0512 17:29:16.609527 3552 log.go:172] (0xc000a66c60) (0xc0007083c0) Stream added, broadcasting: 1\nI0512 17:29:16.610952 3552 log.go:172] (0xc000a66c60) Reply frame received for 1\nI0512 17:29:16.610988 3552 log.go:172] (0xc000a66c60) (0xc0007f4280) Create stream\nI0512 17:29:16.611005 3552 log.go:172] (0xc000a66c60) (0xc0007f4280) Stream added, broadcasting: 3\nI0512 17:29:16.611751 3552 log.go:172] (0xc000a66c60) Reply frame received for 3\nI0512 17:29:16.611776 3552 log.go:172] (0xc000a66c60) (0xc0007000a0) Create stream\nI0512 17:29:16.611783 3552 log.go:172] (0xc000a66c60) (0xc0007000a0) Stream added, broadcasting: 5\nI0512 17:29:16.612411 3552 log.go:172] (0xc000a66c60) Reply frame received for 5\nI0512 17:29:16.694202 3552 log.go:172] (0xc000a66c60) Data frame received for 5\nI0512 17:29:16.694239 3552 log.go:172] (0xc0007000a0) (5) Data frame handling\nI0512
17:29:16.694251 3552 log.go:172] (0xc0007000a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0512 17:29:16.694264 3552 log.go:172] (0xc000a66c60) Data frame received for 3\nI0512 17:29:16.694293 3552 log.go:172] (0xc0007f4280) (3) Data frame handling\nI0512 17:29:16.694321 3552 log.go:172] (0xc000a66c60) Data frame received for 5\nI0512 17:29:16.694355 3552 log.go:172] (0xc0007000a0) (5) Data frame handling\nI0512 17:29:16.694400 3552 log.go:172] (0xc0007f4280) (3) Data frame sent\nI0512 17:29:16.694420 3552 log.go:172] (0xc000a66c60) Data frame received for 3\nI0512 17:29:16.694428 3552 log.go:172] (0xc0007f4280) (3) Data frame handling\nI0512 17:29:16.695528 3552 log.go:172] (0xc000a66c60) Data frame received for 1\nI0512 17:29:16.695550 3552 log.go:172] (0xc0007083c0) (1) Data frame handling\nI0512 17:29:16.695566 3552 log.go:172] (0xc0007083c0) (1) Data frame sent\nI0512 17:29:16.695586 3552 log.go:172] (0xc000a66c60) (0xc0007083c0) Stream removed, broadcasting: 1\nI0512 17:29:16.695606 3552 log.go:172] (0xc000a66c60) Go away received\nI0512 17:29:16.695963 3552 log.go:172] (0xc000a66c60) (0xc0007083c0) Stream removed, broadcasting: 1\nI0512 17:29:16.695985 3552 log.go:172] (0xc000a66c60) (0xc0007f4280) Stream removed, broadcasting: 3\nI0512 17:29:16.695994 3552 log.go:172] (0xc000a66c60) (0xc0007000a0) Stream removed, broadcasting: 5\n" May 12 17:29:16.700: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 12 17:29:16.700: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 12 17:29:16.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5329 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 17:29:16.891: INFO: stderr: "I0512 17:29:16.824853 3572 log.go:172] (0xc000104370) (0xc0005dee60) Create stream\nI0512 17:29:16.824905 3572 log.go:172] (0xc000104370) (0xc0005dee60) Stream added, broadcasting: 1\nI0512 17:29:16.837405 3572 log.go:172] (0xc000104370) Reply frame received for 1\nI0512 17:29:16.837447 3572 log.go:172] (0xc000104370) (0xc00031c000) Create stream\nI0512 17:29:16.837468 3572 log.go:172] (0xc000104370) (0xc00031c000) Stream added, broadcasting: 3\nI0512 17:29:16.840893 3572 log.go:172] (0xc000104370) Reply frame received for 3\nI0512 17:29:16.840970 3572 log.go:172] (0xc000104370) (0xc0006e35e0) Create stream\nI0512 17:29:16.841002 3572 log.go:172] (0xc000104370) (0xc0006e35e0) Stream added, broadcasting: 5\nI0512 17:29:16.842070 3572 log.go:172] (0xc000104370) Reply frame received for 5\nI0512 17:29:16.885870 3572 log.go:172] (0xc000104370) Data frame received for 3\nI0512 17:29:16.885892 3572 log.go:172] (0xc00031c000) (3) Data frame handling\nI0512 17:29:16.885910 3572 log.go:172] (0xc00031c000) (3) Data frame sent\nI0512 17:29:16.885927 3572 log.go:172] (0xc000104370) Data frame received for 3\nI0512 17:29:16.885932 3572 log.go:172] (0xc00031c000) (3) Data frame handling\nI0512 17:29:16.885960 3572 log.go:172] (0xc000104370) Data frame received for 5\nI0512 17:29:16.885983 3572 log.go:172] (0xc0006e35e0) (5) Data frame handling\nI0512 17:29:16.885999 3572 log.go:172] (0xc0006e35e0) (5) Data frame sent\nI0512 17:29:16.886016 3572 log.go:172] (0xc000104370) Data frame received for 5\nI0512 17:29:16.886026 3572 log.go:172] (0xc0006e35e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0512 
17:29:16.887099 3572 log.go:172] (0xc000104370) Data frame received for 1\nI0512 17:29:16.887114 3572 log.go:172] (0xc0005dee60) (1) Data frame handling\nI0512 17:29:16.887123 3572 log.go:172] (0xc0005dee60) (1) Data frame sent\nI0512 17:29:16.887131 3572 log.go:172] (0xc000104370) (0xc0005dee60) Stream removed, broadcasting: 1\nI0512 17:29:16.887146 3572 log.go:172] (0xc000104370) Go away received\nI0512 17:29:16.887326 3572 log.go:172] (0xc000104370) (0xc0005dee60) Stream removed, broadcasting: 1\nI0512 17:29:16.887336 3572 log.go:172] (0xc000104370) (0xc00031c000) Stream removed, broadcasting: 3\nI0512 17:29:16.887341 3572 log.go:172] (0xc000104370) (0xc0006e35e0) Stream removed, broadcasting: 5\n" May 12 17:29:16.891: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 12 17:29:16.891: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 12 17:29:16.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5329 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 12 17:29:17.089: INFO: stderr: "I0512 17:29:17.007932 3589 log.go:172] (0xc00023ca50) (0xc000704780) Create stream\nI0512 17:29:17.007983 3589 log.go:172] (0xc00023ca50) (0xc000704780) Stream added, broadcasting: 1\nI0512 17:29:17.009931 3589 log.go:172] (0xc00023ca50) Reply frame received for 1\nI0512 17:29:17.009964 3589 log.go:172] (0xc00023ca50) (0xc0005806e0) Create stream\nI0512 17:29:17.009974 3589 log.go:172] (0xc00023ca50) (0xc0005806e0) Stream added, broadcasting: 3\nI0512 17:29:17.010561 3589 log.go:172] (0xc00023ca50) Reply frame received for 3\nI0512 17:29:17.010584 3589 log.go:172] (0xc00023ca50) (0xc000580780) Create stream\nI0512 17:29:17.010593 3589 log.go:172] (0xc00023ca50) (0xc000580780) Stream added, broadcasting: 5\nI0512 17:29:17.011360 3589 log.go:172] (0xc00023ca50) Reply frame received for 5\nI0512 17:29:17.067747 3589 log.go:172] (0xc00023ca50) Data frame received for 5\nI0512 17:29:17.067768 3589 log.go:172] (0xc000580780) (5) Data frame handling\nI0512 17:29:17.067780 3589 log.go:172] (0xc000580780) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0512 17:29:17.084742 3589 log.go:172] (0xc00023ca50) Data frame received for 5\nI0512 17:29:17.084770 3589 log.go:172] (0xc000580780) (5) Data frame handling\nI0512 17:29:17.084787 3589 log.go:172] (0xc00023ca50) Data frame received for 3\nI0512 17:29:17.084795 3589 log.go:172] (0xc0005806e0) (3) Data frame handling\nI0512 17:29:17.084804 3589 log.go:172] (0xc0005806e0) (3) Data frame sent\nI0512 17:29:17.084817 3589 log.go:172] (0xc00023ca50) Data frame received for 3\nI0512 17:29:17.084828 3589 log.go:172] (0xc0005806e0) (3) Data frame handling\nI0512 17:29:17.086025 3589 log.go:172] (0xc00023ca50) Data frame received for 1\nI0512 17:29:17.086037 3589 log.go:172] (0xc000704780) (1) Data frame handling\nI0512 17:29:17.086045 3589 log.go:172] (0xc000704780) (1) Data frame sent\nI0512 17:29:17.086056 3589 log.go:172] (0xc00023ca50) (0xc000704780) Stream removed, broadcasting: 1\nI0512 17:29:17.086307 3589 log.go:172] (0xc00023ca50) (0xc000704780) Stream removed, broadcasting: 1\nI0512 17:29:17.086319 3589 log.go:172] (0xc00023ca50) (0xc0005806e0) Stream removed, broadcasting: 3\nI0512 17:29:17.086359 3589 log.go:172] (0xc00023ca50) Go away received\nI0512 17:29:17.086394 3589 log.go:172] (0xc00023ca50) (0xc000580780) Stream removed, 
broadcasting: 5\n" May 12 17:29:17.089: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 12 17:29:17.089: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 12 17:29:17.089: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 12 17:29:57.241: INFO: Deleting all statefulset in ns statefulset-5329 May 12 17:29:57.244: INFO: Scaling statefulset ss to 0 May 12 17:29:57.253: INFO: Waiting for statefulset status.replicas updated to 0 May 12 17:29:57.255: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:29:57.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5329" for this suite. • [SLOW TEST:121.123 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":189,"skipped":3108,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:29:58.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 12 17:30:00.151: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 12 17:30:02.161: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724901400, loc:(*time.Location)(0x78ee0c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724901400, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724901400, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724901400, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 17:30:04.247: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724901400, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724901400, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724901400, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724901400, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 17:30:06.282: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724901400, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724901400, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724901400, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724901400, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 12 17:30:09.569: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created mutating webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of mutating webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:30:14.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5791" for this suite. STEP: Destroying namespace "webhook-5791-markers" for this suite.
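------------------------------
[Editor's note] The listing test above registers a set of mutating webhook configurations, verifies that a configMap gets mutated, then deletes the whole collection and verifies that mutation stops. A minimal sketch of the same flow by hand with kubectl; the label selector here is illustrative, not the label the test actually applies:

# Mutating webhook configurations are cluster-scoped objects:
kubectl get mutatingwebhookconfigurations
# Deleting by label removes the whole collection in one call, mirroring the test:
kubectl delete mutatingwebhookconfigurations -l purpose=e2e-demo
------------------------------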
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.696 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":190,"skipped":3115,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:30:15.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 12 17:30:24.944: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:30:25.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7484" for this suite. 
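------------------------------
[Editor's note] With terminationMessagePolicy: FallbackToLogsOnError, the kubelet uses the tail of the container's log as its termination message when the container fails and no termination-message file was written, which is why the test above expects the log output "DONE" to appear as the message. A minimal sketch (pod name and image are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: term-log-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    # fail on purpose after writing to the log
    command: ["/bin/sh", "-c", "echo -n DONE; exit 1"]
    terminationMessagePolicy: FallbackToLogsOnError
EOF
# after the container has failed, the log tail shows up as the termination message:
kubectl get pod term-log-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'
------------------------------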
• [SLOW TEST:9.772 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":191,"skipped":3121,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:30:25.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:30:26.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9800" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":192,"skipped":3133,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:30:26.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:30:43.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8433" for this suite. • [SLOW TEST:17.163 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":193,"skipped":3143,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:30:43.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:30:49.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-3125" for this suite. STEP: Destroying namespace "nsdeletetest-3267" for this suite. May 12 17:30:49.993: INFO: Namespace nsdeletetest-3267 was already deleted STEP: Destroying namespace "nsdeletetest-8298" for this suite. 
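------------------------------
[Editor's note] Namespace deletion cascades to every namespaced object, which is what the test above verifies for services. A minimal sketch of the same check (namespace and service names are illustrative):

kubectl create namespace nsdelete-demo
kubectl create service clusterip demo-svc --tcp=80:80 -n nsdelete-demo
# --wait (the default) blocks until the namespace and its contents are gone:
kubectl delete namespace nsdelete-demo --wait
# the service was removed together with its namespace; this lookup should fail:
kubectl get service demo-svc -n nsdelete-demo
------------------------------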
• [SLOW TEST:6.368 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":194,"skipped":3162,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:30:49.997: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test hostPath mode May 12 17:30:50.055: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-9610" to be "success or failure" May 12 17:30:50.075: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 20.0992ms May 12 17:30:52.079: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024340681s May 12 17:30:54.468: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.413298375s May 12 17:30:56.472: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 6.417313002s May 12 17:30:58.475: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.420580108s STEP: Saw pod success May 12 17:30:58.475: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" May 12 17:30:58.478: INFO: Trying to get logs from node jerma-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod May 12 17:30:58.810: INFO: Waiting for pod pod-host-path-test to disappear May 12 17:30:59.183: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:30:59.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-9610" for this suite. 
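------------------------------
[Editor's note] The hostPath test above mounts a directory from the node's filesystem and checks the mode bits the container sees on the mount point. A minimal sketch of an equivalent pod (pod name and host path are illustrative; hostPath ties the pod to one node's filesystem, so it is rarely appropriate outside tests):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    # print the mode of the mounted directory, then exit
    command: ["/bin/sh", "-c", "ls -ld /test-volume"]
    volumeMounts:
    - name: vol
      mountPath: /test-volume
  volumes:
  - name: vol
    hostPath:
      path: /tmp/hostpath-demo
      type: DirectoryOrCreate
EOF
kubectl logs hostpath-demo
------------------------------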
• [SLOW TEST:9.191 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":195,"skipped":3174,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:30:59.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs May 12 17:30:59.647: INFO: Waiting up to 5m0s for pod "pod-ca2b6425-7f85-466c-9718-30007f95a368" in namespace "emptydir-8532" to be "success or failure" May 12 17:31:00.344: INFO: Pod "pod-ca2b6425-7f85-466c-9718-30007f95a368": Phase="Pending", Reason="", readiness=false. Elapsed: 697.379804ms May 12 17:31:02.348: INFO: Pod "pod-ca2b6425-7f85-466c-9718-30007f95a368": Phase="Pending", Reason="", readiness=false. Elapsed: 2.700909009s May 12 17:31:04.402: INFO: Pod "pod-ca2b6425-7f85-466c-9718-30007f95a368": Phase="Pending", Reason="", readiness=false. Elapsed: 4.755094742s May 12 17:31:06.405: INFO: Pod "pod-ca2b6425-7f85-466c-9718-30007f95a368": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.758350132s STEP: Saw pod success May 12 17:31:06.405: INFO: Pod "pod-ca2b6425-7f85-466c-9718-30007f95a368" satisfied condition "success or failure" May 12 17:31:06.407: INFO: Trying to get logs from node jerma-worker2 pod pod-ca2b6425-7f85-466c-9718-30007f95a368 container test-container: STEP: delete the pod May 12 17:31:06.473: INFO: Waiting for pod pod-ca2b6425-7f85-466c-9718-30007f95a368 to disappear May 12 17:31:06.482: INFO: Pod pod-ca2b6425-7f85-466c-9718-30007f95a368 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:31:06.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8532" for this suite. 
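------------------------------
[Editor's note] The (non-root,0644,tmpfs) variant above exercises an emptyDir backed by memory (tmpfs), written by a non-root user with 0644 file mode. A minimal sketch of an equivalent pod (names and UID are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001    # non-root, as in the (non-root,...) variant
  containers:
  - name: test
    image: busybox
    # write a file, set the 0644 mode under test, and show the result
    command: ["/bin/sh", "-c", "echo hi > /mnt/f && chmod 0644 /mnt/f && ls -l /mnt/f"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory   # tmpfs-backed, as in the (...,tmpfs) variant
EOF
------------------------------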
• [SLOW TEST:7.299 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":196,"skipped":3177,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:31:06.488: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9501 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9501;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9501 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9501;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9501.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9501.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9501.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9501.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9501.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9501.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9501.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9501.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9501.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9501.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9501.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9501.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9501.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 137.142.111.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.111.142.137_udp@PTR;check="$$(dig +tcp +noall +answer +search 137.142.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.142.137_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9501 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9501;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9501 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9501;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9501.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9501.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9501.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9501.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9501.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9501.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9501.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9501.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9501.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9501.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9501.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9501.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9501.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 137.142.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.142.137_udp@PTR;check="$$(dig +tcp +noall +answer +search 137.142.111.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.111.142.137_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 12 17:31:16.944: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:17.091: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:17.094: INFO: Unable to read wheezy_udp@dns-test-service.dns-9501 from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:17.096: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9501 from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:17.098: INFO: Unable to read wheezy_udp@dns-test-service.dns-9501.svc from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:17.100: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9501.svc from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:17.102: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9501.svc from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:17.104: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9501.svc from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:17.121: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:17.123: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:17.125: INFO: Unable to read jessie_udp@dns-test-service.dns-9501 from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:17.127: INFO: Unable to read jessie_tcp@dns-test-service.dns-9501 from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:17.129: INFO: Unable to read jessie_udp@dns-test-service.dns-9501.svc from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:17.131: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-9501.svc from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:17.133: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9501.svc from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:17.135: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9501.svc from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:17.149: INFO: Lookups using dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9501 wheezy_tcp@dns-test-service.dns-9501 wheezy_udp@dns-test-service.dns-9501.svc wheezy_tcp@dns-test-service.dns-9501.svc wheezy_udp@_http._tcp.dns-test-service.dns-9501.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9501.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9501 jessie_tcp@dns-test-service.dns-9501 jessie_udp@dns-test-service.dns-9501.svc jessie_tcp@dns-test-service.dns-9501.svc jessie_udp@_http._tcp.dns-test-service.dns-9501.svc jessie_tcp@_http._tcp.dns-test-service.dns-9501.svc] May 12 17:31:22.153: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:22.156: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:22.159: INFO: Unable to read wheezy_udp@dns-test-service.dns-9501 from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:22.163: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9501 from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:22.165: INFO: Unable to read wheezy_udp@dns-test-service.dns-9501.svc from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:22.168: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9501.svc from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:22.170: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9501.svc from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:22.172: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9501.svc from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:22.187: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:22.189: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:22.191: INFO: Unable to read jessie_udp@dns-test-service.dns-9501 from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:22.194: INFO: Unable to read jessie_tcp@dns-test-service.dns-9501 from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:22.196: INFO: Unable to read jessie_udp@dns-test-service.dns-9501.svc from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:22.199: INFO: Unable to read jessie_tcp@dns-test-service.dns-9501.svc from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:22.201: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9501.svc from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:22.204: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9501.svc from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:22.218: INFO: Lookups using dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9501 wheezy_tcp@dns-test-service.dns-9501 wheezy_udp@dns-test-service.dns-9501.svc wheezy_tcp@dns-test-service.dns-9501.svc wheezy_udp@_http._tcp.dns-test-service.dns-9501.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9501.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9501 jessie_tcp@dns-test-service.dns-9501 jessie_udp@dns-test-service.dns-9501.svc jessie_tcp@dns-test-service.dns-9501.svc jessie_udp@_http._tcp.dns-test-service.dns-9501.svc jessie_tcp@_http._tcp.dns-test-service.dns-9501.svc] May 12 17:31:27.154: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:27.158: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:27.161: INFO: Unable to read wheezy_udp@dns-test-service.dns-9501 from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:27.164: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9501 from pod 
dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:27.167: INFO: Unable to read wheezy_udp@dns-test-service.dns-9501.svc from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:27.170: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9501.svc from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:27.172: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9501.svc from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:27.174: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9501.svc from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:27.207: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:27.211: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:27.214: INFO: Unable to read jessie_udp@dns-test-service.dns-9501 from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:27.217: INFO: Unable to read jessie_tcp@dns-test-service.dns-9501 from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:27.220: INFO: Unable to read jessie_udp@dns-test-service.dns-9501.svc from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:27.223: INFO: Unable to read jessie_tcp@dns-test-service.dns-9501.svc from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:27.228: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9501.svc from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:27.230: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9501.svc from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:27.248: INFO: Lookups using dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9501 wheezy_tcp@dns-test-service.dns-9501 wheezy_udp@dns-test-service.dns-9501.svc wheezy_tcp@dns-test-service.dns-9501.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-9501.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9501.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9501 jessie_tcp@dns-test-service.dns-9501 jessie_udp@dns-test-service.dns-9501.svc jessie_tcp@dns-test-service.dns-9501.svc jessie_udp@_http._tcp.dns-test-service.dns-9501.svc jessie_tcp@_http._tcp.dns-test-service.dns-9501.svc] May 12 17:31:32.152: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:32.154: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:32.156: INFO: Unable to read wheezy_udp@dns-test-service.dns-9501 from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:32.158: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9501 from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:32.160: INFO: Unable to read wheezy_udp@dns-test-service.dns-9501.svc from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:32.163: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9501.svc from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:32.164: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9501.svc from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:32.167: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9501.svc from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:32.190: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:32.191: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:32.193: INFO: Unable to read jessie_udp@dns-test-service.dns-9501 from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:32.195: INFO: Unable to read jessie_tcp@dns-test-service.dns-9501 from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:32.197: INFO: Unable to read jessie_udp@dns-test-service.dns-9501.svc from pod 
dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:32.199: INFO: Unable to read jessie_tcp@dns-test-service.dns-9501.svc from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:32.201: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9501.svc from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:32.203: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9501.svc from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:32.216: INFO: Lookups using dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9501 wheezy_tcp@dns-test-service.dns-9501 wheezy_udp@dns-test-service.dns-9501.svc wheezy_tcp@dns-test-service.dns-9501.svc wheezy_udp@_http._tcp.dns-test-service.dns-9501.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9501.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9501 jessie_tcp@dns-test-service.dns-9501 jessie_udp@dns-test-service.dns-9501.svc jessie_tcp@dns-test-service.dns-9501.svc jessie_udp@_http._tcp.dns-test-service.dns-9501.svc jessie_tcp@_http._tcp.dns-test-service.dns-9501.svc] May 12 17:31:37.175: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:37.177: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:37.179: INFO: Unable to read wheezy_udp@dns-test-service.dns-9501 from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:37.180: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9501 from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:37.182: INFO: Unable to read wheezy_udp@dns-test-service.dns-9501.svc from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:37.184: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9501.svc from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:37.186: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9501.svc from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:37.188: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9501.svc from pod 
dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:37.221: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:37.223: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:37.225: INFO: Unable to read jessie_udp@dns-test-service.dns-9501 from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:37.227: INFO: Unable to read jessie_tcp@dns-test-service.dns-9501 from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:37.228: INFO: Unable to read jessie_udp@dns-test-service.dns-9501.svc from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:37.230: INFO: Unable to read jessie_tcp@dns-test-service.dns-9501.svc from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:37.232: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9501.svc from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:37.234: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9501.svc from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:37.246: INFO: Lookups using dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9501 wheezy_tcp@dns-test-service.dns-9501 wheezy_udp@dns-test-service.dns-9501.svc wheezy_tcp@dns-test-service.dns-9501.svc wheezy_udp@_http._tcp.dns-test-service.dns-9501.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9501.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9501 jessie_tcp@dns-test-service.dns-9501 jessie_udp@dns-test-service.dns-9501.svc jessie_tcp@dns-test-service.dns-9501.svc jessie_udp@_http._tcp.dns-test-service.dns-9501.svc jessie_tcp@_http._tcp.dns-test-service.dns-9501.svc] May 12 17:31:42.154: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:42.158: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:42.160: INFO: Unable to read wheezy_udp@dns-test-service.dns-9501 from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the 
server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:42.163: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9501 from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:42.166: INFO: Unable to read wheezy_udp@dns-test-service.dns-9501.svc from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:42.169: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9501.svc from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:42.172: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9501.svc from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:42.175: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9501.svc from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:42.194: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:42.198: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:42.201: INFO: Unable to read jessie_udp@dns-test-service.dns-9501 from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:42.204: INFO: Unable to read jessie_tcp@dns-test-service.dns-9501 from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:42.206: INFO: Unable to read jessie_udp@dns-test-service.dns-9501.svc from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:42.209: INFO: Unable to read jessie_tcp@dns-test-service.dns-9501.svc from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:42.211: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9501.svc from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:42.214: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9501.svc from pod dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043: the server could not find the requested resource (get pods dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043) May 12 17:31:42.227: INFO: Lookups using dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043 failed for: [wheezy_udp@dns-test-service 
wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9501 wheezy_tcp@dns-test-service.dns-9501 wheezy_udp@dns-test-service.dns-9501.svc wheezy_tcp@dns-test-service.dns-9501.svc wheezy_udp@_http._tcp.dns-test-service.dns-9501.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9501.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9501 jessie_tcp@dns-test-service.dns-9501 jessie_udp@dns-test-service.dns-9501.svc jessie_tcp@dns-test-service.dns-9501.svc jessie_udp@_http._tcp.dns-test-service.dns-9501.svc jessie_tcp@_http._tcp.dns-test-service.dns-9501.svc] May 12 17:31:47.371: INFO: DNS probes using dns-9501/dns-test-84f8096c-a4cb-4ea5-9498-7b2bba6f6043 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:31:48.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9501" for this suite. • [SLOW TEST:42.496 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":197,"skipped":3177,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:31:48.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container May 12 17:31:58.728: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-1506 PodName:pod-sharedvolume-4d4791ed-c8a3-488a-a6d9-35ad35cd1dfb ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 17:31:58.728: INFO: >>> kubeConfig: /root/.kube/config I0512 17:31:58.765707 7 log.go:172] (0xc0023080b0) (0xc0026ee780) Create stream I0512 17:31:58.765741 7 log.go:172] (0xc0023080b0) (0xc0026ee780) Stream added, broadcasting: 1 I0512 17:31:58.767838 7 log.go:172] (0xc0023080b0) Reply frame received for 1 I0512 17:31:58.767905 7 log.go:172] (0xc0023080b0) (0xc0027ce8c0) Create stream I0512 17:31:58.767931 7 log.go:172] (0xc0023080b0) (0xc0027ce8c0) Stream added, broadcasting: 3 I0512 17:31:58.768990 7 log.go:172] (0xc0023080b0) Reply frame received for 3 I0512 17:31:58.769049 7 log.go:172] (0xc0023080b0) 
(0xc0026ee820) Create stream I0512 17:31:58.769071 7 log.go:172] (0xc0023080b0) (0xc0026ee820) Stream added, broadcasting: 5 I0512 17:31:58.770486 7 log.go:172] (0xc0023080b0) Reply frame received for 5 I0512 17:31:58.845414 7 log.go:172] (0xc0023080b0) Data frame received for 5 I0512 17:31:58.845452 7 log.go:172] (0xc0026ee820) (5) Data frame handling I0512 17:31:58.845515 7 log.go:172] (0xc0023080b0) Data frame received for 3 I0512 17:31:58.845622 7 log.go:172] (0xc0027ce8c0) (3) Data frame handling I0512 17:31:58.845660 7 log.go:172] (0xc0027ce8c0) (3) Data frame sent I0512 17:31:58.845679 7 log.go:172] (0xc0023080b0) Data frame received for 3 I0512 17:31:58.845694 7 log.go:172] (0xc0027ce8c0) (3) Data frame handling I0512 17:31:58.847416 7 log.go:172] (0xc0023080b0) Data frame received for 1 I0512 17:31:58.847450 7 log.go:172] (0xc0026ee780) (1) Data frame handling I0512 17:31:58.847477 7 log.go:172] (0xc0026ee780) (1) Data frame sent I0512 17:31:58.847499 7 log.go:172] (0xc0023080b0) (0xc0026ee780) Stream removed, broadcasting: 1 I0512 17:31:58.847522 7 log.go:172] (0xc0023080b0) Go away received I0512 17:31:58.847626 7 log.go:172] (0xc0023080b0) (0xc0026ee780) Stream removed, broadcasting: 1 I0512 17:31:58.847657 7 log.go:172] (0xc0023080b0) (0xc0027ce8c0) Stream removed, broadcasting: 3 I0512 17:31:58.847672 7 log.go:172] (0xc0023080b0) (0xc0026ee820) Stream removed, broadcasting: 5 May 12 17:31:58.847: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:31:58.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1506" for this suite. • [SLOW TEST:9.872 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":198,"skipped":3192,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:31:58.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:32:15.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-67" for this suite. • [SLOW TEST:16.920 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":199,"skipped":3257,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:32:15.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 12 17:32:15.942: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 12 17:32:17.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8845 create -f -' May 12 17:32:23.844: INFO: stderr: "" May 12 17:32:23.844: INFO: stdout: "e2e-test-crd-publish-openapi-6744-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 12 17:32:23.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8845 delete e2e-test-crd-publish-openapi-6744-crds test-cr' May 12 17:32:23.983: INFO: stderr: "" May 12 17:32:23.983: INFO: stdout: "e2e-test-crd-publish-openapi-6744-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" May 12 
17:32:23.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8845 apply -f -' May 12 17:32:24.294: INFO: stderr: "" May 12 17:32:24.294: INFO: stdout: "e2e-test-crd-publish-openapi-6744-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 12 17:32:24.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8845 delete e2e-test-crd-publish-openapi-6744-crds test-cr' May 12 17:32:24.419: INFO: stderr: "" May 12 17:32:24.419: INFO: stdout: "e2e-test-crd-publish-openapi-6744-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 12 17:32:24.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6744-crds' May 12 17:32:24.688: INFO: stderr: "" May 12 17:32:24.688: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6744-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:32:26.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8845" for this suite. 
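The client-side validation and kubectl explain behavior exercised above comes down to the CRD publishing a schema that preserves unknown fields inside an embedded object. A minimal sketch of such a CRD, assuming an illustrative group and kind (the run above uses generated names like e2e-test-crd-publish-openapi-6744-crd):

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: waldos.example.com          # hypothetical; the e2e fixture generates its own name
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: waldos
    singular: waldo
    kind: Waldo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            # accept and persist arbitrary properties inside this embedded object
            x-kubernetes-preserve-unknown-fields: true
          status:
            type: object
            x-kubernetes-preserve-unknown-fields: true
EOF

Because the embedded objects opt out of pruning, kubectl create/apply accepts requests with unknown properties under spec, while kubectl explain still renders the typed top-level fields, matching the explain output above.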
• [SLOW TEST:10.780 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":200,"skipped":3263,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:32:26.558: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 12 17:32:26.637: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:32:27.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4283" for this suite. 
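Getting, updating, and patching the status sub-resource, as tested above, requires the CRD to declare subresources.status, which splits /status onto its own endpoint so spec writes and status writes cannot clobber each other. A minimal sketch, with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: noxus.mygroup.example.com   # hypothetical name
spec:
  group: mygroup.example.com
  scope: Namespaced
  names:
    plural: noxus
    singular: noxu
    kind: Noxu
  versions:
  - name: v1
    served: true
    storage: true
    subresources:
      status: {}                    # serve .../status as a separate endpoint
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
EOF

With the subresource enabled, writes to the main endpoint ignore .status, and writes to /status ignore everything but .status.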
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":201,"skipped":3271,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:32:27.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service nodeport-test with type=NodePort in namespace services-9451 STEP: creating replication controller nodeport-test in namespace services-9451 I0512 17:32:27.697099 7 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-9451, replica count: 2 I0512 17:32:30.747699 7 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0512 17:32:33.747923 7 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 12 17:32:33.747: INFO: Creating new exec pod May 12 17:32:38.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9451 execpodmn4vq -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' May 12 17:32:38.987: INFO: stderr: "I0512 17:32:38.920464 3708 log.go:172] (0xc000574d10) (0xc0002b9720) Create stream\nI0512 17:32:38.920533 3708 log.go:172] (0xc000574d10) (0xc0002b9720) Stream added, broadcasting: 1\nI0512 17:32:38.922760 3708 log.go:172] (0xc000574d10) Reply frame received for 1\nI0512 17:32:38.922790 3708 log.go:172] (0xc000574d10) (0xc00097c000) Create stream\nI0512 17:32:38.922802 3708 log.go:172] (0xc000574d10) (0xc00097c000) Stream added, broadcasting: 3\nI0512 17:32:38.923692 3708 log.go:172] (0xc000574d10) Reply frame received for 3\nI0512 17:32:38.923739 3708 log.go:172] (0xc000574d10) (0xc0009d8000) Create stream\nI0512 17:32:38.923755 3708 log.go:172] (0xc000574d10) (0xc0009d8000) Stream added, broadcasting: 5\nI0512 17:32:38.924560 3708 log.go:172] (0xc000574d10) Reply frame received for 5\nI0512 17:32:38.980498 3708 log.go:172] (0xc000574d10) Data frame received for 5\nI0512 17:32:38.980516 3708 log.go:172] (0xc0009d8000) (5) Data frame handling\nI0512 17:32:38.980528 3708 log.go:172] (0xc0009d8000) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0512 17:32:38.980963 3708 log.go:172] (0xc000574d10) Data frame received for 5\nI0512 17:32:38.980991 3708 log.go:172] (0xc0009d8000) (5) Data frame handling\nI0512 17:32:38.981006 3708 log.go:172] (0xc0009d8000) (5) Data frame sent\nConnection to nodeport-test 80 port 
[tcp/http] succeeded!\nI0512 17:32:38.981294 3708 log.go:172] (0xc000574d10) Data frame received for 5\nI0512 17:32:38.981316 3708 log.go:172] (0xc0009d8000) (5) Data frame handling\nI0512 17:32:38.981629 3708 log.go:172] (0xc000574d10) Data frame received for 3\nI0512 17:32:38.981644 3708 log.go:172] (0xc00097c000) (3) Data frame handling\nI0512 17:32:38.982901 3708 log.go:172] (0xc000574d10) Data frame received for 1\nI0512 17:32:38.982909 3708 log.go:172] (0xc0002b9720) (1) Data frame handling\nI0512 17:32:38.982916 3708 log.go:172] (0xc0002b9720) (1) Data frame sent\nI0512 17:32:38.982923 3708 log.go:172] (0xc000574d10) (0xc0002b9720) Stream removed, broadcasting: 1\nI0512 17:32:38.982991 3708 log.go:172] (0xc000574d10) Go away received\nI0512 17:32:38.983126 3708 log.go:172] (0xc000574d10) (0xc0002b9720) Stream removed, broadcasting: 1\nI0512 17:32:38.983134 3708 log.go:172] (0xc000574d10) (0xc00097c000) Stream removed, broadcasting: 3\nI0512 17:32:38.983139 3708 log.go:172] (0xc000574d10) (0xc0009d8000) Stream removed, broadcasting: 5\n" May 12 17:32:38.987: INFO: stdout: "" May 12 17:32:38.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9451 execpodmn4vq -- /bin/sh -x -c nc -zv -t -w 2 10.98.74.119 80' May 12 17:32:39.164: INFO: stderr: "I0512 17:32:39.107783 3729 log.go:172] (0xc000ae1130) (0xc000bfa500) Create stream\nI0512 17:32:39.107836 3729 log.go:172] (0xc000ae1130) (0xc000bfa500) Stream added, broadcasting: 1\nI0512 17:32:39.111033 3729 log.go:172] (0xc000ae1130) Reply frame received for 1\nI0512 17:32:39.111086 3729 log.go:172] (0xc000ae1130) (0xc000809ea0) Create stream\nI0512 17:32:39.111104 3729 log.go:172] (0xc000ae1130) (0xc000809ea0) Stream added, broadcasting: 3\nI0512 17:32:39.111957 3729 log.go:172] (0xc000ae1130) Reply frame received for 3\nI0512 17:32:39.112003 3729 log.go:172] (0xc000ae1130) (0xc0006070e0) Create stream\nI0512 17:32:39.112019 3729 log.go:172] (0xc000ae1130) (0xc0006070e0) Stream added, broadcasting: 5\nI0512 17:32:39.112890 3729 log.go:172] (0xc000ae1130) Reply frame received for 5\nI0512 17:32:39.157736 3729 log.go:172] (0xc000ae1130) Data frame received for 5\nI0512 17:32:39.157755 3729 log.go:172] (0xc0006070e0) (5) Data frame handling\nI0512 17:32:39.157769 3729 log.go:172] (0xc0006070e0) (5) Data frame sent\n+ nc -zv -t -w 2 10.98.74.119 80\nConnection to 10.98.74.119 80 port [tcp/http] succeeded!\nI0512 17:32:39.157962 3729 log.go:172] (0xc000ae1130) Data frame received for 5\nI0512 17:32:39.157984 3729 log.go:172] (0xc0006070e0) (5) Data frame handling\nI0512 17:32:39.158476 3729 log.go:172] (0xc000ae1130) Data frame received for 3\nI0512 17:32:39.158522 3729 log.go:172] (0xc000809ea0) (3) Data frame handling\nI0512 17:32:39.159711 3729 log.go:172] (0xc000ae1130) Data frame received for 1\nI0512 17:32:39.159870 3729 log.go:172] (0xc000bfa500) (1) Data frame handling\nI0512 17:32:39.159899 3729 log.go:172] (0xc000bfa500) (1) Data frame sent\nI0512 17:32:39.159918 3729 log.go:172] (0xc000ae1130) (0xc000bfa500) Stream removed, broadcasting: 1\nI0512 17:32:39.159937 3729 log.go:172] (0xc000ae1130) Go away received\nI0512 17:32:39.160193 3729 log.go:172] (0xc000ae1130) (0xc000bfa500) Stream removed, broadcasting: 1\nI0512 17:32:39.160206 3729 log.go:172] (0xc000ae1130) (0xc000809ea0) Stream removed, broadcasting: 3\nI0512 17:32:39.160213 3729 log.go:172] (0xc000ae1130) (0xc0006070e0) Stream removed, broadcasting: 5\n" May 12 17:32:39.164: INFO: stdout: "" May 12 17:32:39.164: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9451 execpodmn4vq -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 30147' May 12 17:32:39.337: INFO: stderr: "I0512 17:32:39.273996 3750 log.go:172] (0xc000a36000) (0xc000750000) Create stream\nI0512 17:32:39.274035 3750 log.go:172] (0xc000a36000) (0xc000750000) Stream added, broadcasting: 1\nI0512 17:32:39.275281 3750 log.go:172] (0xc000a36000) Reply frame received for 1\nI0512 17:32:39.275307 3750 log.go:172] (0xc000a36000) (0xc0006a01e0) Create stream\nI0512 17:32:39.275315 3750 log.go:172] (0xc000a36000) (0xc0006a01e0) Stream added, broadcasting: 3\nI0512 17:32:39.276116 3750 log.go:172] (0xc000a36000) Reply frame received for 3\nI0512 17:32:39.276143 3750 log.go:172] (0xc000a36000) (0xc0008c4000) Create stream\nI0512 17:32:39.276152 3750 log.go:172] (0xc000a36000) (0xc0008c4000) Stream added, broadcasting: 5\nI0512 17:32:39.276975 3750 log.go:172] (0xc000a36000) Reply frame received for 5\nI0512 17:32:39.331390 3750 log.go:172] (0xc000a36000) Data frame received for 5\nI0512 17:32:39.331515 3750 log.go:172] (0xc0008c4000) (5) Data frame handling\nI0512 17:32:39.331608 3750 log.go:172] (0xc0008c4000) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.10 30147\nConnection to 172.17.0.10 30147 port [tcp/30147] succeeded!\nI0512 17:32:39.331781 3750 log.go:172] (0xc000a36000) Data frame received for 3\nI0512 17:32:39.331815 3750 log.go:172] (0xc0006a01e0) (3) Data frame handling\nI0512 17:32:39.331877 3750 log.go:172] (0xc000a36000) Data frame received for 5\nI0512 17:32:39.331898 3750 log.go:172] (0xc0008c4000) (5) Data frame handling\nI0512 17:32:39.333082 3750 log.go:172] (0xc000a36000) Data frame received for 1\nI0512 17:32:39.333127 3750 log.go:172] (0xc000750000) (1) Data frame handling\nI0512 17:32:39.333298 3750 log.go:172] (0xc000750000) (1) Data frame sent\nI0512 17:32:39.333313 3750 log.go:172] (0xc000a36000) (0xc000750000) Stream removed, broadcasting: 1\nI0512 17:32:39.333328 3750 log.go:172] (0xc000a36000) Go away received\nI0512 17:32:39.333745 3750 log.go:172] (0xc000a36000) (0xc000750000) Stream removed, broadcasting: 1\nI0512 17:32:39.333769 3750 log.go:172] (0xc000a36000) (0xc0006a01e0) Stream removed, broadcasting: 3\nI0512 17:32:39.333780 3750 log.go:172] (0xc000a36000) (0xc0008c4000) Stream removed, broadcasting: 5\n" May 12 17:32:39.337: INFO: stdout: "" May 12 17:32:39.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9451 execpodmn4vq -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 30147' May 12 17:32:39.517: INFO: stderr: "I0512 17:32:39.449417 3769 log.go:172] (0xc000a88b00) (0xc0007303c0) Create stream\nI0512 17:32:39.449466 3769 log.go:172] (0xc000a88b00) (0xc0007303c0) Stream added, broadcasting: 1\nI0512 17:32:39.450810 3769 log.go:172] (0xc000a88b00) Reply frame received for 1\nI0512 17:32:39.450847 3769 log.go:172] (0xc000a88b00) (0xc000a02000) Create stream\nI0512 17:32:39.450858 3769 log.go:172] (0xc000a88b00) (0xc000a02000) Stream added, broadcasting: 3\nI0512 17:32:39.451650 3769 log.go:172] (0xc000a88b00) Reply frame received for 3\nI0512 17:32:39.451675 3769 log.go:172] (0xc000a88b00) (0xc000732000) Create stream\nI0512 17:32:39.451687 3769 log.go:172] (0xc000a88b00) (0xc000732000) Stream added, broadcasting: 5\nI0512 17:32:39.452344 3769 log.go:172] (0xc000a88b00) Reply frame received for 5\nI0512 17:32:39.513758 3769 log.go:172] (0xc000a88b00) Data frame received for 3\nI0512 17:32:39.513786 3769 log.go:172] (0xc000a02000) (3) Data 
frame handling\nI0512 17:32:39.513799 3769 log.go:172] (0xc000a88b00) Data frame received for 5\nI0512 17:32:39.513803 3769 log.go:172] (0xc000732000) (5) Data frame handling\nI0512 17:32:39.513808 3769 log.go:172] (0xc000732000) (5) Data frame sent\nI0512 17:32:39.513813 3769 log.go:172] (0xc000a88b00) Data frame received for 5\nI0512 17:32:39.513816 3769 log.go:172] (0xc000732000) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.8 30147\nConnection to 172.17.0.8 30147 port [tcp/30147] succeeded!\nI0512 17:32:39.514686 3769 log.go:172] (0xc000a88b00) Data frame received for 1\nI0512 17:32:39.514698 3769 log.go:172] (0xc0007303c0) (1) Data frame handling\nI0512 17:32:39.514706 3769 log.go:172] (0xc0007303c0) (1) Data frame sent\nI0512 17:32:39.514713 3769 log.go:172] (0xc000a88b00) (0xc0007303c0) Stream removed, broadcasting: 1\nI0512 17:32:39.514773 3769 log.go:172] (0xc000a88b00) Go away received\nI0512 17:32:39.514925 3769 log.go:172] (0xc000a88b00) (0xc0007303c0) Stream removed, broadcasting: 1\nI0512 17:32:39.514936 3769 log.go:172] (0xc000a88b00) (0xc000a02000) Stream removed, broadcasting: 3\nI0512 17:32:39.514941 3769 log.go:172] (0xc000a88b00) (0xc000732000) Stream removed, broadcasting: 5\n" May 12 17:32:39.517: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:32:39.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9451" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.292 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":202,"skipped":3294,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:32:39.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 12 17:32:39.612: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4bc349e5-4f84-4cf5-aa53-b7a106939033" in namespace "projected-5295" to be "success or failure" May 12 17:32:39.634: INFO: Pod "downwardapi-volume-4bc349e5-4f84-4cf5-aa53-b7a106939033": Phase="Pending", Reason="", 
readiness=false. Elapsed: 21.805465ms May 12 17:32:41.642: INFO: Pod "downwardapi-volume-4bc349e5-4f84-4cf5-aa53-b7a106939033": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030004558s May 12 17:32:43.645: INFO: Pod "downwardapi-volume-4bc349e5-4f84-4cf5-aa53-b7a106939033": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033001211s STEP: Saw pod success May 12 17:32:43.645: INFO: Pod "downwardapi-volume-4bc349e5-4f84-4cf5-aa53-b7a106939033" satisfied condition "success or failure" May 12 17:32:43.648: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-4bc349e5-4f84-4cf5-aa53-b7a106939033 container client-container: STEP: delete the pod May 12 17:32:44.020: INFO: Waiting for pod downwardapi-volume-4bc349e5-4f84-4cf5-aa53-b7a106939033 to disappear May 12 17:32:44.044: INFO: Pod downwardapi-volume-4bc349e5-4f84-4cf5-aa53-b7a106939033 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:32:44.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5295" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":203,"skipped":3294,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:32:44.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation May 12 17:32:44.285: INFO: >>> kubeConfig: /root/.kube/config May 12 17:32:47.605: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:32:58.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8012" for this suite. 
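The check above creates two CRDs in the same group and version but with different kinds and verifies both show up in the published OpenAPI document. One way to inspect that document directly, assuming an illustrative example.com group (OpenAPI v2 definition keys use the reverse-DNS form of the group, e.g. com.example.v1.Waldo):

# list the published definition keys for kinds in the group
kubectl get --raw /openapi/v2 | grep -o '"com\.example\.v1\.[A-Za-z]*"' | sort -u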
• [SLOW TEST:14.388 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":204,"skipped":3354,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:32:58.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 12 17:32:58.685: INFO: Waiting up to 5m0s for pod "downwardapi-volume-43bf1bcd-92a2-46c8-9d6d-a273b52e9249" in namespace "downward-api-1087" to be "success or failure" May 12 17:32:58.822: INFO: Pod "downwardapi-volume-43bf1bcd-92a2-46c8-9d6d-a273b52e9249": Phase="Pending", Reason="", readiness=false. Elapsed: 136.917387ms May 12 17:33:00.826: INFO: Pod "downwardapi-volume-43bf1bcd-92a2-46c8-9d6d-a273b52e9249": Phase="Pending", Reason="", readiness=false. Elapsed: 2.140590035s May 12 17:33:03.035: INFO: Pod "downwardapi-volume-43bf1bcd-92a2-46c8-9d6d-a273b52e9249": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.349442619s STEP: Saw pod success May 12 17:33:03.035: INFO: Pod "downwardapi-volume-43bf1bcd-92a2-46c8-9d6d-a273b52e9249" satisfied condition "success or failure" May 12 17:33:03.086: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-43bf1bcd-92a2-46c8-9d6d-a273b52e9249 container client-container: STEP: delete the pod May 12 17:33:03.296: INFO: Waiting for pod downwardapi-volume-43bf1bcd-92a2-46c8-9d6d-a273b52e9249 to disappear May 12 17:33:03.620: INFO: Pod downwardapi-volume-43bf1bcd-92a2-46c8-9d6d-a273b52e9249 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:33:03.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1087" for this suite. 
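The downward API volume plugin test above projects the container's memory request into a file and asserts on the file content. A minimal sketch of the pod shape involved, with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memreq-demo     # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory # rendered in bytes with the default divisor of "1"
EOF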
• [SLOW TEST:5.504 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":205,"skipped":3373,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:33:03.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-9eb1c70c-fabe-4053-a5fb-6872697f87d3 STEP: Creating a pod to test consume secrets May 12 17:33:04.674: INFO: Waiting up to 5m0s for pod "pod-secrets-91fa009d-b58e-40f5-8f00-d07b72b8b795" in namespace "secrets-2921" to be "success or failure" May 12 17:33:04.677: INFO: Pod "pod-secrets-91fa009d-b58e-40f5-8f00-d07b72b8b795": Phase="Pending", Reason="", readiness=false. Elapsed: 2.872281ms May 12 17:33:06.680: INFO: Pod "pod-secrets-91fa009d-b58e-40f5-8f00-d07b72b8b795": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006186013s May 12 17:33:08.771: INFO: Pod "pod-secrets-91fa009d-b58e-40f5-8f00-d07b72b8b795": Phase="Running", Reason="", readiness=true. Elapsed: 4.097497757s May 12 17:33:10.883: INFO: Pod "pod-secrets-91fa009d-b58e-40f5-8f00-d07b72b8b795": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.209108125s STEP: Saw pod success May 12 17:33:10.883: INFO: Pod "pod-secrets-91fa009d-b58e-40f5-8f00-d07b72b8b795" satisfied condition "success or failure" May 12 17:33:10.886: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-91fa009d-b58e-40f5-8f00-d07b72b8b795 container secret-env-test: STEP: delete the pod May 12 17:33:11.034: INFO: Waiting for pod pod-secrets-91fa009d-b58e-40f5-8f00-d07b72b8b795 to disappear May 12 17:33:11.074: INFO: Pod pod-secrets-91fa009d-b58e-40f5-8f00-d07b72b8b795 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:33:11.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2921" for this suite. 
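The Secrets test above wires a secret key into the pod environment through secretKeyRef and checks the value in the container output. A minimal sketch, with hypothetical names:

kubectl create secret generic secret-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo             # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox
    command: ["sh", "-c", "echo $SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-demo
          key: data-1
EOF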
• [SLOW TEST:7.138 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":206,"skipped":3393,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:33:11.082: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 12 17:33:11.664: INFO: Pod name pod-release: Found 0 pods out of 1 May 12 17:33:16.686: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:33:18.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2318" for this suite. 
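------------------------------
The "release" at the heart of this ReplicationController test is a label flip: once a pod's labels stop matching the RC's selector, the controller disowns the pod and backfills the replica. A minimal Go sketch of the objects involved (image and the replacement label value are illustrative assumptions; the "pod-release" name matches the log):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// replicationController builds an RC whose selector is the single label
// {"name": "pod-release"}.
func replicationController() *corev1.ReplicationController {
	one := int32(1)
	labels := map[string]string{"name": "pod-release"}
	return &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-release"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &one,
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "pod-release", Image: "httpd:2.4.38-alpine"}}, // illustrative image
				},
			},
		},
	}
}

// releasePod flips the pod's labels out from under the selector; after
// an Update with these labels the RC controller drops its
// ownerReference from the pod ("releases" it) and creates a
// replacement to restore the replica count.
func releasePod(pod *corev1.Pod) {
	pod.Labels = map[string]string{"name": "not-pod-release"} // any non-matching value works
}
------------------------------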
• [SLOW TEST:7.798 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":207,"skipped":3406,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:33:18.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-f6b76686-11e9-431a-ac3a-44a23fffd245 in namespace container-probe-1173 May 12 17:33:25.530: INFO: Started pod busybox-f6b76686-11e9-431a-ac3a-44a23fffd245 in namespace container-probe-1173 STEP: checking the pod's current state and verifying that restartCount is present May 12 17:33:25.721: INFO: Initial restart count of pod busybox-f6b76686-11e9-431a-ac3a-44a23fffd245 is 0 May 12 17:34:12.514: INFO: Restart count of pod container-probe-1173/busybox-f6b76686-11e9-431a-ac3a-44a23fffd245 is now 1 (46.792411034s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:34:12.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1173" for this suite. 
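------------------------------
What drives the restart observed above is an exec liveness probe. A sketch of the container spec under test (image, shell command, and probe timings are illustrative assumptions; the probe command is the `cat /tmp/health` named in the test title; note that in the v1.17-era Go API the probe action lives under Handler, renamed ProbeHandler in later releases):

package sketch

import corev1 "k8s.io/api/core/v1"

// livenessContainer is healthy while /tmp/health exists, then turns
// unhealthy forever; the failing exec probe makes the kubelet restart
// the container, which is why the restart count goes from 0 to 1.
func livenessContainer() corev1.Container {
	return corev1.Container{
		Name:    "busybox",
		Image:   "busybox", // illustrative image
		Command: []string{"sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"},
		LivenessProbe: &corev1.Probe{
			Handler: corev1.Handler{
				Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
			},
			InitialDelaySeconds: 15, // illustrative timing
			FailureThreshold:    1,
		},
	}
}
------------------------------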
• [SLOW TEST:53.974 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":208,"skipped":3409,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:34:12.854: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 12 17:34:13.285: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:34:13.287: INFO: Number of nodes with available pods: 0 May 12 17:34:13.287: INFO: Node jerma-worker is running more than one daemon pod May 12 17:34:14.292: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:34:14.296: INFO: Number of nodes with available pods: 0 May 12 17:34:14.296: INFO: Node jerma-worker is running more than one daemon pod May 12 17:34:15.740: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:34:15.800: INFO: Number of nodes with available pods: 0 May 12 17:34:15.800: INFO: Node jerma-worker is running more than one daemon pod May 12 17:34:16.292: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:34:16.295: INFO: Number of nodes with available pods: 0 May 12 17:34:16.295: INFO: Node jerma-worker is running more than one daemon pod May 12 17:34:17.292: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:34:17.295: INFO: Number of nodes with available pods: 0 May 12 17:34:17.295: INFO: Node jerma-worker is running more than one daemon pod May 12 17:34:18.395: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip 
checking this node May 12 17:34:18.400: INFO: Number of nodes with available pods: 0 May 12 17:34:18.400: INFO: Node jerma-worker is running more than one daemon pod May 12 17:34:19.791: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:34:19.796: INFO: Number of nodes with available pods: 0 May 12 17:34:19.796: INFO: Node jerma-worker is running more than one daemon pod May 12 17:34:20.301: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:34:20.338: INFO: Number of nodes with available pods: 0 May 12 17:34:20.338: INFO: Node jerma-worker is running more than one daemon pod May 12 17:34:21.291: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:34:21.293: INFO: Number of nodes with available pods: 2 May 12 17:34:21.293: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. May 12 17:34:21.327: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:34:21.330: INFO: Number of nodes with available pods: 1 May 12 17:34:21.330: INFO: Node jerma-worker2 is running more than one daemon pod May 12 17:34:22.334: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:34:22.337: INFO: Number of nodes with available pods: 1 May 12 17:34:22.337: INFO: Node jerma-worker2 is running more than one daemon pod May 12 17:34:23.335: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:34:23.338: INFO: Number of nodes with available pods: 1 May 12 17:34:23.338: INFO: Node jerma-worker2 is running more than one daemon pod May 12 17:34:24.336: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:34:24.339: INFO: Number of nodes with available pods: 1 May 12 17:34:24.339: INFO: Node jerma-worker2 is running more than one daemon pod May 12 17:34:25.335: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:34:25.339: INFO: Number of nodes with available pods: 1 May 12 17:34:25.339: INFO: Node jerma-worker2 is running more than one daemon pod May 12 17:34:26.336: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:34:26.339: INFO: Number of nodes with available pods: 1 May 12 17:34:26.339: INFO: Node jerma-worker2 is running more than one daemon pod May 12 17:34:27.335: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:34:27.370: INFO: Number of nodes with 
available pods: 1 May 12 17:34:27.370: INFO: Node jerma-worker2 is running more than one daemon pod May 12 17:34:28.412: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:34:28.416: INFO: Number of nodes with available pods: 1 May 12 17:34:28.416: INFO: Node jerma-worker2 is running more than one daemon pod May 12 17:34:29.334: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:34:29.338: INFO: Number of nodes with available pods: 1 May 12 17:34:29.338: INFO: Node jerma-worker2 is running more than one daemon pod May 12 17:34:30.387: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:34:30.393: INFO: Number of nodes with available pods: 1 May 12 17:34:30.393: INFO: Node jerma-worker2 is running more than one daemon pod May 12 17:34:31.334: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:34:31.336: INFO: Number of nodes with available pods: 2 May 12 17:34:31.336: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7241, will wait for the garbage collector to delete the pods May 12 17:34:31.395: INFO: Deleting DaemonSet.extensions daemon-set took: 4.887945ms May 12 17:34:31.696: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.233117ms May 12 17:34:39.332: INFO: Number of nodes with available pods: 0 May 12 17:34:39.332: INFO: Number of running nodes: 0, number of available pods: 0 May 12 17:34:39.335: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7241/daemonsets","resourceVersion":"15630170"},"items":null} May 12 17:34:39.336: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7241/pods","resourceVersion":"15630170"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:34:39.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7241" for this suite. 
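------------------------------
The repeated "can't tolerate node jerma-control-plane" lines above are the suite skipping the tainted control-plane node while it counts DaemonSet pods on the two workers. A rough Go sketch of the simple DaemonSet being run and stopped (labels and image are illustrative assumptions; the "daemon-set" name matches the log):

package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// simpleDaemonSet declares no toleration for the control plane's
// node-role.kubernetes.io/master:NoSchedule taint, so its pods land
// only on the worker nodes, one per node.
func simpleDaemonSet() *appsv1.DaemonSet {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	return &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "httpd:2.4.38-alpine", // illustrative image
					}},
				},
			},
		},
	}
}
------------------------------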
• [SLOW TEST:26.494 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":209,"skipped":3415,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:34:39.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 12 17:34:39.682: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0fd1d1b1-0414-4851-b099-eb1443182b15" in namespace "projected-1632" to be "success or failure" May 12 17:34:39.711: INFO: Pod "downwardapi-volume-0fd1d1b1-0414-4851-b099-eb1443182b15": Phase="Pending", Reason="", readiness=false. Elapsed: 28.670238ms May 12 17:34:41.806: INFO: Pod "downwardapi-volume-0fd1d1b1-0414-4851-b099-eb1443182b15": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124279449s May 12 17:34:43.818: INFO: Pod "downwardapi-volume-0fd1d1b1-0414-4851-b099-eb1443182b15": Phase="Pending", Reason="", readiness=false. Elapsed: 4.136338627s May 12 17:34:45.822: INFO: Pod "downwardapi-volume-0fd1d1b1-0414-4851-b099-eb1443182b15": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.139937549s STEP: Saw pod success May 12 17:34:45.822: INFO: Pod "downwardapi-volume-0fd1d1b1-0414-4851-b099-eb1443182b15" satisfied condition "success or failure" May 12 17:34:45.825: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-0fd1d1b1-0414-4851-b099-eb1443182b15 container client-container: STEP: delete the pod May 12 17:34:45.885: INFO: Waiting for pod downwardapi-volume-0fd1d1b1-0414-4851-b099-eb1443182b15 to disappear May 12 17:34:45.913: INFO: Pod downwardapi-volume-0fd1d1b1-0414-4851-b099-eb1443182b15 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:34:45.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1632" for this suite. 
• [SLOW TEST:6.569 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":210,"skipped":3415,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:34:45.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 12 17:35:00.295: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6688 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 17:35:00.295: INFO: >>> kubeConfig: /root/.kube/config I0512 17:35:00.325861 7 log.go:172] (0xc005cee6e0) (0xc00196c320) Create stream I0512 17:35:00.325892 7 log.go:172] (0xc005cee6e0) (0xc00196c320) Stream added, broadcasting: 1 I0512 17:35:00.327486 7 log.go:172] (0xc005cee6e0) Reply frame received for 1 I0512 17:35:00.327533 7 log.go:172] (0xc005cee6e0) (0xc001a846e0) Create stream I0512 17:35:00.327549 7 log.go:172] (0xc005cee6e0) (0xc001a846e0) Stream added, broadcasting: 3 I0512 17:35:00.328444 7 log.go:172] (0xc005cee6e0) Reply frame received for 3 I0512 17:35:00.328471 7 log.go:172] (0xc005cee6e0) (0xc001e41e00) Create stream I0512 17:35:00.328478 7 log.go:172] (0xc005cee6e0) (0xc001e41e00) Stream added, broadcasting: 5 I0512 17:35:00.329335 7 log.go:172] (0xc005cee6e0) Reply frame received for 5 I0512 17:35:00.377409 7 log.go:172] (0xc005cee6e0) Data frame received for 5 I0512 17:35:00.377436 7 log.go:172] (0xc001e41e00) (5) Data frame handling I0512 17:35:00.377455 7 log.go:172] (0xc005cee6e0) Data frame received for 3 I0512 17:35:00.377468 7 log.go:172] (0xc001a846e0) (3) Data frame handling I0512 17:35:00.377481 7 log.go:172] (0xc001a846e0) (3) Data frame sent I0512 17:35:00.377489 7 log.go:172] (0xc005cee6e0) Data frame received for 3 I0512 17:35:00.377499 7 log.go:172] (0xc001a846e0) (3) Data frame handling I0512 17:35:00.379078 7 log.go:172] (0xc005cee6e0) Data frame received for 1 I0512 17:35:00.379106 7 log.go:172] (0xc00196c320) (1) Data frame handling I0512 17:35:00.379126 7 log.go:172] (0xc00196c320) (1) Data frame sent 
I0512 17:35:00.379138 7 log.go:172] (0xc005cee6e0) (0xc00196c320) Stream removed, broadcasting: 1 I0512 17:35:00.379154 7 log.go:172] (0xc005cee6e0) Go away received I0512 17:35:00.379210 7 log.go:172] (0xc005cee6e0) (0xc00196c320) Stream removed, broadcasting: 1 I0512 17:35:00.379253 7 log.go:172] (0xc005cee6e0) (0xc001a846e0) Stream removed, broadcasting: 3 I0512 17:35:00.379282 7 log.go:172] (0xc005cee6e0) (0xc001e41e00) Stream removed, broadcasting: 5 May 12 17:35:00.379: INFO: Exec stderr: "" May 12 17:35:00.379: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6688 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 17:35:00.379: INFO: >>> kubeConfig: /root/.kube/config I0512 17:35:00.402885 7 log.go:172] (0xc005ceed10) (0xc00196c500) Create stream I0512 17:35:00.402913 7 log.go:172] (0xc005ceed10) (0xc00196c500) Stream added, broadcasting: 1 I0512 17:35:00.404225 7 log.go:172] (0xc005ceed10) Reply frame received for 1 I0512 17:35:00.404252 7 log.go:172] (0xc005ceed10) (0xc0028f0280) Create stream I0512 17:35:00.404266 7 log.go:172] (0xc005ceed10) (0xc0028f0280) Stream added, broadcasting: 3 I0512 17:35:00.405079 7 log.go:172] (0xc005ceed10) Reply frame received for 3 I0512 17:35:00.405255 7 log.go:172] (0xc005ceed10) (0xc00196c5a0) Create stream I0512 17:35:00.405276 7 log.go:172] (0xc005ceed10) (0xc00196c5a0) Stream added, broadcasting: 5 I0512 17:35:00.406044 7 log.go:172] (0xc005ceed10) Reply frame received for 5 I0512 17:35:00.454970 7 log.go:172] (0xc005ceed10) Data frame received for 5 I0512 17:35:00.455000 7 log.go:172] (0xc00196c5a0) (5) Data frame handling I0512 17:35:00.455029 7 log.go:172] (0xc005ceed10) Data frame received for 3 I0512 17:35:00.455044 7 log.go:172] (0xc0028f0280) (3) Data frame handling I0512 17:35:00.455059 7 log.go:172] (0xc0028f0280) (3) Data frame sent I0512 17:35:00.455069 7 log.go:172] (0xc005ceed10) Data frame received for 3 I0512 17:35:00.455081 7 log.go:172] (0xc0028f0280) (3) Data frame handling I0512 17:35:00.456494 7 log.go:172] (0xc005ceed10) Data frame received for 1 I0512 17:35:00.456526 7 log.go:172] (0xc00196c500) (1) Data frame handling I0512 17:35:00.456542 7 log.go:172] (0xc00196c500) (1) Data frame sent I0512 17:35:00.456566 7 log.go:172] (0xc005ceed10) (0xc00196c500) Stream removed, broadcasting: 1 I0512 17:35:00.456661 7 log.go:172] (0xc005ceed10) Go away received I0512 17:35:00.456707 7 log.go:172] (0xc005ceed10) (0xc00196c500) Stream removed, broadcasting: 1 I0512 17:35:00.456740 7 log.go:172] (0xc005ceed10) (0xc0028f0280) Stream removed, broadcasting: 3 I0512 17:35:00.456758 7 log.go:172] (0xc005ceed10) (0xc00196c5a0) Stream removed, broadcasting: 5 May 12 17:35:00.456: INFO: Exec stderr: "" May 12 17:35:00.456: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6688 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 17:35:00.456: INFO: >>> kubeConfig: /root/.kube/config I0512 17:35:00.483004 7 log.go:172] (0xc0024022c0) (0xc0011fc1e0) Create stream I0512 17:35:00.483050 7 log.go:172] (0xc0024022c0) (0xc0011fc1e0) Stream added, broadcasting: 1 I0512 17:35:00.487863 7 log.go:172] (0xc0024022c0) Reply frame received for 1 I0512 17:35:00.487906 7 log.go:172] (0xc0024022c0) (0xc0011fc780) Create stream I0512 17:35:00.487920 7 log.go:172] (0xc0024022c0) (0xc0011fc780) Stream added, broadcasting: 3 I0512 17:35:00.490230 7 log.go:172] 
(0xc0024022c0) Reply frame received for 3 I0512 17:35:00.490320 7 log.go:172] (0xc0024022c0) (0xc0026ee000) Create stream I0512 17:35:00.490351 7 log.go:172] (0xc0024022c0) (0xc0026ee000) Stream added, broadcasting: 5 I0512 17:35:00.492839 7 log.go:172] (0xc0024022c0) Reply frame received for 5 I0512 17:35:00.546121 7 log.go:172] (0xc0024022c0) Data frame received for 5 I0512 17:35:00.546146 7 log.go:172] (0xc0026ee000) (5) Data frame handling I0512 17:35:00.546163 7 log.go:172] (0xc0024022c0) Data frame received for 3 I0512 17:35:00.546171 7 log.go:172] (0xc0011fc780) (3) Data frame handling I0512 17:35:00.546180 7 log.go:172] (0xc0011fc780) (3) Data frame sent I0512 17:35:00.546188 7 log.go:172] (0xc0024022c0) Data frame received for 3 I0512 17:35:00.546194 7 log.go:172] (0xc0011fc780) (3) Data frame handling I0512 17:35:00.547176 7 log.go:172] (0xc0024022c0) Data frame received for 1 I0512 17:35:00.547196 7 log.go:172] (0xc0011fc1e0) (1) Data frame handling I0512 17:35:00.547209 7 log.go:172] (0xc0011fc1e0) (1) Data frame sent I0512 17:35:00.547222 7 log.go:172] (0xc0024022c0) (0xc0011fc1e0) Stream removed, broadcasting: 1 I0512 17:35:00.547244 7 log.go:172] (0xc0024022c0) Go away received I0512 17:35:00.547366 7 log.go:172] (0xc0024022c0) (0xc0011fc1e0) Stream removed, broadcasting: 1 I0512 17:35:00.547389 7 log.go:172] (0xc0024022c0) (0xc0011fc780) Stream removed, broadcasting: 3 I0512 17:35:00.547399 7 log.go:172] (0xc0024022c0) (0xc0026ee000) Stream removed, broadcasting: 5 May 12 17:35:00.547: INFO: Exec stderr: "" May 12 17:35:00.547: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6688 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 17:35:00.547: INFO: >>> kubeConfig: /root/.kube/config I0512 17:35:00.570088 7 log.go:172] (0xc002402b00) (0xc0011fc960) Create stream I0512 17:35:00.570152 7 log.go:172] (0xc002402b00) (0xc0011fc960) Stream added, broadcasting: 1 I0512 17:35:00.571645 7 log.go:172] (0xc002402b00) Reply frame received for 1 I0512 17:35:00.571680 7 log.go:172] (0xc002402b00) (0xc0026ee140) Create stream I0512 17:35:00.571694 7 log.go:172] (0xc002402b00) (0xc0026ee140) Stream added, broadcasting: 3 I0512 17:35:00.572491 7 log.go:172] (0xc002402b00) Reply frame received for 3 I0512 17:35:00.572537 7 log.go:172] (0xc002402b00) (0xc0028f0320) Create stream I0512 17:35:00.572551 7 log.go:172] (0xc002402b00) (0xc0028f0320) Stream added, broadcasting: 5 I0512 17:35:00.573659 7 log.go:172] (0xc002402b00) Reply frame received for 5 I0512 17:35:00.648584 7 log.go:172] (0xc002402b00) Data frame received for 3 I0512 17:35:00.648615 7 log.go:172] (0xc0026ee140) (3) Data frame handling I0512 17:35:00.648625 7 log.go:172] (0xc0026ee140) (3) Data frame sent I0512 17:35:00.648634 7 log.go:172] (0xc002402b00) Data frame received for 3 I0512 17:35:00.648647 7 log.go:172] (0xc0026ee140) (3) Data frame handling I0512 17:35:00.648691 7 log.go:172] (0xc002402b00) Data frame received for 5 I0512 17:35:00.648764 7 log.go:172] (0xc0028f0320) (5) Data frame handling I0512 17:35:00.650274 7 log.go:172] (0xc002402b00) Data frame received for 1 I0512 17:35:00.650333 7 log.go:172] (0xc0011fc960) (1) Data frame handling I0512 17:35:00.650356 7 log.go:172] (0xc0011fc960) (1) Data frame sent I0512 17:35:00.650369 7 log.go:172] (0xc002402b00) (0xc0011fc960) Stream removed, broadcasting: 1 I0512 17:35:00.650390 7 log.go:172] (0xc002402b00) Go away received I0512 17:35:00.650540 7 log.go:172] 
(0xc002402b00) (0xc0011fc960) Stream removed, broadcasting: 1 I0512 17:35:00.650571 7 log.go:172] (0xc002402b00) (0xc0026ee140) Stream removed, broadcasting: 3 I0512 17:35:00.650589 7 log.go:172] (0xc002402b00) (0xc0028f0320) Stream removed, broadcasting: 5 May 12 17:35:00.650: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 12 17:35:00.650: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6688 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 17:35:00.650: INFO: >>> kubeConfig: /root/.kube/config I0512 17:35:00.679022 7 log.go:172] (0xc0056bc000) (0xc001a84c80) Create stream I0512 17:35:00.679062 7 log.go:172] (0xc0056bc000) (0xc001a84c80) Stream added, broadcasting: 1 I0512 17:35:00.680325 7 log.go:172] (0xc0056bc000) Reply frame received for 1 I0512 17:35:00.680358 7 log.go:172] (0xc0056bc000) (0xc00196c8c0) Create stream I0512 17:35:00.680370 7 log.go:172] (0xc0056bc000) (0xc00196c8c0) Stream added, broadcasting: 3 I0512 17:35:00.681059 7 log.go:172] (0xc0056bc000) Reply frame received for 3 I0512 17:35:00.681086 7 log.go:172] (0xc0056bc000) (0xc001a84fa0) Create stream I0512 17:35:00.681099 7 log.go:172] (0xc0056bc000) (0xc001a84fa0) Stream added, broadcasting: 5 I0512 17:35:00.681874 7 log.go:172] (0xc0056bc000) Reply frame received for 5 I0512 17:35:00.744208 7 log.go:172] (0xc0056bc000) Data frame received for 5 I0512 17:35:00.744232 7 log.go:172] (0xc001a84fa0) (5) Data frame handling I0512 17:35:00.744265 7 log.go:172] (0xc0056bc000) Data frame received for 3 I0512 17:35:00.744282 7 log.go:172] (0xc00196c8c0) (3) Data frame handling I0512 17:35:00.744289 7 log.go:172] (0xc00196c8c0) (3) Data frame sent I0512 17:35:00.744513 7 log.go:172] (0xc0056bc000) Data frame received for 3 I0512 17:35:00.744544 7 log.go:172] (0xc00196c8c0) (3) Data frame handling I0512 17:35:00.746380 7 log.go:172] (0xc0056bc000) Data frame received for 1 I0512 17:35:00.746394 7 log.go:172] (0xc001a84c80) (1) Data frame handling I0512 17:35:00.746405 7 log.go:172] (0xc001a84c80) (1) Data frame sent I0512 17:35:00.746729 7 log.go:172] (0xc0056bc000) (0xc001a84c80) Stream removed, broadcasting: 1 I0512 17:35:00.746805 7 log.go:172] (0xc0056bc000) (0xc001a84c80) Stream removed, broadcasting: 1 I0512 17:35:00.746816 7 log.go:172] (0xc0056bc000) (0xc00196c8c0) Stream removed, broadcasting: 3 I0512 17:35:00.746827 7 log.go:172] (0xc0056bc000) (0xc001a84fa0) Stream removed, broadcasting: 5 May 12 17:35:00.746: INFO: Exec stderr: "" May 12 17:35:00.746: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6688 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 17:35:00.746: INFO: >>> kubeConfig: /root/.kube/config I0512 17:35:00.748583 7 log.go:172] (0xc0056bc000) Go away received I0512 17:35:00.776930 7 log.go:172] (0xc002402e70) (0xc0011fcc80) Create stream I0512 17:35:00.776963 7 log.go:172] (0xc002402e70) (0xc0011fcc80) Stream added, broadcasting: 1 I0512 17:35:00.778823 7 log.go:172] (0xc002402e70) Reply frame received for 1 I0512 17:35:00.778852 7 log.go:172] (0xc002402e70) (0xc0026ee3c0) Create stream I0512 17:35:00.778859 7 log.go:172] (0xc002402e70) (0xc0026ee3c0) Stream added, broadcasting: 3 I0512 17:35:00.779768 7 log.go:172] (0xc002402e70) Reply frame received for 3 I0512 17:35:00.779813 7 log.go:172] (0xc002402e70) 
(0xc0026ee460) Create stream I0512 17:35:00.779827 7 log.go:172] (0xc002402e70) (0xc0026ee460) Stream added, broadcasting: 5 I0512 17:35:00.780642 7 log.go:172] (0xc002402e70) Reply frame received for 5 I0512 17:35:00.844236 7 log.go:172] (0xc002402e70) Data frame received for 5 I0512 17:35:00.844271 7 log.go:172] (0xc0026ee460) (5) Data frame handling I0512 17:35:00.844294 7 log.go:172] (0xc002402e70) Data frame received for 3 I0512 17:35:00.844307 7 log.go:172] (0xc0026ee3c0) (3) Data frame handling I0512 17:35:00.844326 7 log.go:172] (0xc0026ee3c0) (3) Data frame sent I0512 17:35:00.844340 7 log.go:172] (0xc002402e70) Data frame received for 3 I0512 17:35:00.844352 7 log.go:172] (0xc0026ee3c0) (3) Data frame handling I0512 17:35:00.846156 7 log.go:172] (0xc002402e70) Data frame received for 1 I0512 17:35:00.846176 7 log.go:172] (0xc0011fcc80) (1) Data frame handling I0512 17:35:00.846191 7 log.go:172] (0xc0011fcc80) (1) Data frame sent I0512 17:35:00.846325 7 log.go:172] (0xc002402e70) (0xc0011fcc80) Stream removed, broadcasting: 1 I0512 17:35:00.846445 7 log.go:172] (0xc002402e70) (0xc0011fcc80) Stream removed, broadcasting: 1 I0512 17:35:00.846467 7 log.go:172] (0xc002402e70) (0xc0026ee3c0) Stream removed, broadcasting: 3 I0512 17:35:00.846487 7 log.go:172] (0xc002402e70) (0xc0026ee460) Stream removed, broadcasting: 5 May 12 17:35:00.846: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true I0512 17:35:00.846534 7 log.go:172] (0xc002402e70) Go away received May 12 17:35:00.846: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6688 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 17:35:00.846: INFO: >>> kubeConfig: /root/.kube/config I0512 17:35:00.878447 7 log.go:172] (0xc0056bc790) (0xc001a85180) Create stream I0512 17:35:00.878487 7 log.go:172] (0xc0056bc790) (0xc001a85180) Stream added, broadcasting: 1 I0512 17:35:00.880629 7 log.go:172] (0xc0056bc790) Reply frame received for 1 I0512 17:35:00.880682 7 log.go:172] (0xc0056bc790) (0xc00196caa0) Create stream I0512 17:35:00.880700 7 log.go:172] (0xc0056bc790) (0xc00196caa0) Stream added, broadcasting: 3 I0512 17:35:00.882265 7 log.go:172] (0xc0056bc790) Reply frame received for 3 I0512 17:35:00.882316 7 log.go:172] (0xc0056bc790) (0xc0011fce60) Create stream I0512 17:35:00.882335 7 log.go:172] (0xc0056bc790) (0xc0011fce60) Stream added, broadcasting: 5 I0512 17:35:00.883332 7 log.go:172] (0xc0056bc790) Reply frame received for 5 I0512 17:35:00.962939 7 log.go:172] (0xc0056bc790) Data frame received for 5 I0512 17:35:00.962977 7 log.go:172] (0xc0011fce60) (5) Data frame handling I0512 17:35:00.963003 7 log.go:172] (0xc0056bc790) Data frame received for 3 I0512 17:35:00.963031 7 log.go:172] (0xc00196caa0) (3) Data frame handling I0512 17:35:00.963059 7 log.go:172] (0xc00196caa0) (3) Data frame sent I0512 17:35:00.963070 7 log.go:172] (0xc0056bc790) Data frame received for 3 I0512 17:35:00.963080 7 log.go:172] (0xc00196caa0) (3) Data frame handling I0512 17:35:00.964242 7 log.go:172] (0xc0056bc790) Data frame received for 1 I0512 17:35:00.964284 7 log.go:172] (0xc001a85180) (1) Data frame handling I0512 17:35:00.964323 7 log.go:172] (0xc001a85180) (1) Data frame sent I0512 17:35:00.964352 7 log.go:172] (0xc0056bc790) (0xc001a85180) Stream removed, broadcasting: 1 I0512 17:35:00.964380 7 log.go:172] (0xc0056bc790) Go away received I0512 17:35:00.964463 7 
log.go:172] (0xc0056bc790) (0xc001a85180) Stream removed, broadcasting: 1 I0512 17:35:00.964481 7 log.go:172] (0xc0056bc790) (0xc00196caa0) Stream removed, broadcasting: 3 I0512 17:35:00.964489 7 log.go:172] (0xc0056bc790) (0xc0011fce60) Stream removed, broadcasting: 5 May 12 17:35:00.964: INFO: Exec stderr: "" May 12 17:35:00.964: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6688 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 17:35:00.964: INFO: >>> kubeConfig: /root/.kube/config I0512 17:35:00.997813 7 log.go:172] (0xc002403550) (0xc0011fd4a0) Create stream I0512 17:35:00.997836 7 log.go:172] (0xc002403550) (0xc0011fd4a0) Stream added, broadcasting: 1 I0512 17:35:00.999530 7 log.go:172] (0xc002403550) Reply frame received for 1 I0512 17:35:00.999567 7 log.go:172] (0xc002403550) (0xc0026ee500) Create stream I0512 17:35:00.999581 7 log.go:172] (0xc002403550) (0xc0026ee500) Stream added, broadcasting: 3 I0512 17:35:01.000513 7 log.go:172] (0xc002403550) Reply frame received for 3 I0512 17:35:01.000544 7 log.go:172] (0xc002403550) (0xc001a85400) Create stream I0512 17:35:01.000555 7 log.go:172] (0xc002403550) (0xc001a85400) Stream added, broadcasting: 5 I0512 17:35:01.001531 7 log.go:172] (0xc002403550) Reply frame received for 5 I0512 17:35:01.059545 7 log.go:172] (0xc002403550) Data frame received for 5 I0512 17:35:01.059593 7 log.go:172] (0xc001a85400) (5) Data frame handling I0512 17:35:01.059616 7 log.go:172] (0xc002403550) Data frame received for 3 I0512 17:35:01.059624 7 log.go:172] (0xc0026ee500) (3) Data frame handling I0512 17:35:01.059634 7 log.go:172] (0xc0026ee500) (3) Data frame sent I0512 17:35:01.059642 7 log.go:172] (0xc002403550) Data frame received for 3 I0512 17:35:01.059649 7 log.go:172] (0xc0026ee500) (3) Data frame handling I0512 17:35:01.061705 7 log.go:172] (0xc002403550) Data frame received for 1 I0512 17:35:01.061739 7 log.go:172] (0xc0011fd4a0) (1) Data frame handling I0512 17:35:01.061779 7 log.go:172] (0xc0011fd4a0) (1) Data frame sent I0512 17:35:01.061828 7 log.go:172] (0xc002403550) (0xc0011fd4a0) Stream removed, broadcasting: 1 I0512 17:35:01.061876 7 log.go:172] (0xc002403550) Go away received I0512 17:35:01.061979 7 log.go:172] (0xc002403550) (0xc0011fd4a0) Stream removed, broadcasting: 1 I0512 17:35:01.062016 7 log.go:172] (0xc002403550) (0xc0026ee500) Stream removed, broadcasting: 3 I0512 17:35:01.062052 7 log.go:172] (0xc002403550) (0xc001a85400) Stream removed, broadcasting: 5 May 12 17:35:01.062: INFO: Exec stderr: "" May 12 17:35:01.062: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6688 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 17:35:01.062: INFO: >>> kubeConfig: /root/.kube/config I0512 17:35:01.090164 7 log.go:172] (0xc001e28160) (0xc0026ee820) Create stream I0512 17:35:01.090203 7 log.go:172] (0xc001e28160) (0xc0026ee820) Stream added, broadcasting: 1 I0512 17:35:01.092276 7 log.go:172] (0xc001e28160) Reply frame received for 1 I0512 17:35:01.092307 7 log.go:172] (0xc001e28160) (0xc0026ee960) Create stream I0512 17:35:01.092318 7 log.go:172] (0xc001e28160) (0xc0026ee960) Stream added, broadcasting: 3 I0512 17:35:01.093323 7 log.go:172] (0xc001e28160) Reply frame received for 3 I0512 17:35:01.093358 7 log.go:172] (0xc001e28160) (0xc001a854a0) Create stream I0512 17:35:01.093374 7 log.go:172] 
(0xc001e28160) (0xc001a854a0) Stream added, broadcasting: 5 I0512 17:35:01.094275 7 log.go:172] (0xc001e28160) Reply frame received for 5 I0512 17:35:01.157031 7 log.go:172] (0xc001e28160) Data frame received for 3 I0512 17:35:01.157063 7 log.go:172] (0xc0026ee960) (3) Data frame handling I0512 17:35:01.157072 7 log.go:172] (0xc0026ee960) (3) Data frame sent I0512 17:35:01.157077 7 log.go:172] (0xc001e28160) Data frame received for 3 I0512 17:35:01.157082 7 log.go:172] (0xc0026ee960) (3) Data frame handling I0512 17:35:01.157103 7 log.go:172] (0xc001e28160) Data frame received for 5 I0512 17:35:01.157226 7 log.go:172] (0xc001a854a0) (5) Data frame handling I0512 17:35:01.158926 7 log.go:172] (0xc001e28160) Data frame received for 1 I0512 17:35:01.158940 7 log.go:172] (0xc0026ee820) (1) Data frame handling I0512 17:35:01.158947 7 log.go:172] (0xc0026ee820) (1) Data frame sent I0512 17:35:01.158955 7 log.go:172] (0xc001e28160) (0xc0026ee820) Stream removed, broadcasting: 1 I0512 17:35:01.159051 7 log.go:172] (0xc001e28160) (0xc0026ee820) Stream removed, broadcasting: 1 I0512 17:35:01.159066 7 log.go:172] (0xc001e28160) (0xc0026ee960) Stream removed, broadcasting: 3 I0512 17:35:01.159196 7 log.go:172] (0xc001e28160) (0xc001a854a0) Stream removed, broadcasting: 5 I0512 17:35:01.159279 7 log.go:172] (0xc001e28160) Go away received May 12 17:35:01.159: INFO: Exec stderr: "" May 12 17:35:01.159: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6688 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 17:35:01.159: INFO: >>> kubeConfig: /root/.kube/config I0512 17:35:01.189750 7 log.go:172] (0xc005cef6b0) (0xc00196cc80) Create stream I0512 17:35:01.189784 7 log.go:172] (0xc005cef6b0) (0xc00196cc80) Stream added, broadcasting: 1 I0512 17:35:01.191666 7 log.go:172] (0xc005cef6b0) Reply frame received for 1 I0512 17:35:01.191710 7 log.go:172] (0xc005cef6b0) (0xc0011fd860) Create stream I0512 17:35:01.191725 7 log.go:172] (0xc005cef6b0) (0xc0011fd860) Stream added, broadcasting: 3 I0512 17:35:01.192759 7 log.go:172] (0xc005cef6b0) Reply frame received for 3 I0512 17:35:01.192801 7 log.go:172] (0xc005cef6b0) (0xc001a85540) Create stream I0512 17:35:01.192820 7 log.go:172] (0xc005cef6b0) (0xc001a85540) Stream added, broadcasting: 5 I0512 17:35:01.194058 7 log.go:172] (0xc005cef6b0) Reply frame received for 5 I0512 17:35:01.244032 7 log.go:172] (0xc005cef6b0) Data frame received for 3 I0512 17:35:01.244056 7 log.go:172] (0xc0011fd860) (3) Data frame handling I0512 17:35:01.244067 7 log.go:172] (0xc0011fd860) (3) Data frame sent I0512 17:35:01.244072 7 log.go:172] (0xc005cef6b0) Data frame received for 3 I0512 17:35:01.244076 7 log.go:172] (0xc0011fd860) (3) Data frame handling I0512 17:35:01.244234 7 log.go:172] (0xc005cef6b0) Data frame received for 5 I0512 17:35:01.244268 7 log.go:172] (0xc001a85540) (5) Data frame handling I0512 17:35:01.246343 7 log.go:172] (0xc005cef6b0) Data frame received for 1 I0512 17:35:01.246361 7 log.go:172] (0xc00196cc80) (1) Data frame handling I0512 17:35:01.246373 7 log.go:172] (0xc00196cc80) (1) Data frame sent I0512 17:35:01.246544 7 log.go:172] (0xc005cef6b0) (0xc00196cc80) Stream removed, broadcasting: 1 I0512 17:35:01.246632 7 log.go:172] (0xc005cef6b0) (0xc00196cc80) Stream removed, broadcasting: 1 I0512 17:35:01.246652 7 log.go:172] (0xc005cef6b0) (0xc0011fd860) Stream removed, broadcasting: 3 I0512 17:35:01.246660 7 log.go:172] (0xc005cef6b0) 
(0xc001a85540) Stream removed, broadcasting: 5 May 12 17:35:01.246: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:35:01.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready I0512 17:35:01.246964 7 log.go:172] (0xc005cef6b0) Go away received STEP: Destroying namespace "e2e-kubelet-etc-hosts-6688" for this suite. • [SLOW TEST:15.335 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":211,"skipped":3472,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:35:01.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 12 17:35:02.016: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 12 17:35:04.024: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724901702, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724901702, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724901702, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724901702, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 17:35:06.027: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724901702, 
loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724901702, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724901702, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724901702, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 12 17:35:09.136: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook May 12 17:35:09.158: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:35:09.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8339" for this suite. STEP: Destroying namespace "webhook-8339-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.021 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":212,"skipped":3474,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:35:09.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 12 17:35:09.348: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-dd9a7d89-d291-420d-ae58-ad98c8d7a9ee" in namespace "downward-api-8838" to be "success or failure" May 12 17:35:09.351: INFO: Pod "downwardapi-volume-dd9a7d89-d291-420d-ae58-ad98c8d7a9ee": Phase="Pending", Reason="", readiness=false. Elapsed: 3.038838ms May 12 17:35:11.478: INFO: Pod "downwardapi-volume-dd9a7d89-d291-420d-ae58-ad98c8d7a9ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129988285s May 12 17:35:13.597: INFO: Pod "downwardapi-volume-dd9a7d89-d291-420d-ae58-ad98c8d7a9ee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.249640577s May 12 17:35:15.647: INFO: Pod "downwardapi-volume-dd9a7d89-d291-420d-ae58-ad98c8d7a9ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.299315844s STEP: Saw pod success May 12 17:35:15.647: INFO: Pod "downwardapi-volume-dd9a7d89-d291-420d-ae58-ad98c8d7a9ee" satisfied condition "success or failure" May 12 17:35:15.652: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-dd9a7d89-d291-420d-ae58-ad98c8d7a9ee container client-container: STEP: delete the pod May 12 17:35:15.816: INFO: Waiting for pod downwardapi-volume-dd9a7d89-d291-420d-ae58-ad98c8d7a9ee to disappear May 12 17:35:15.843: INFO: Pod downwardapi-volume-dd9a7d89-d291-420d-ae58-ad98c8d7a9ee no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:35:15.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8838" for this suite. • [SLOW TEST:6.656 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":213,"skipped":3479,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:35:15.932: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium May 12 17:35:16.407: INFO: Waiting up to 5m0s for pod "pod-33fe0a24-12dc-46bc-ace1-607c716a7ae1" in namespace "emptydir-7988" to be "success or failure" May 12 17:35:16.562: INFO: Pod "pod-33fe0a24-12dc-46bc-ace1-607c716a7ae1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 154.98347ms May 12 17:35:18.564: INFO: Pod "pod-33fe0a24-12dc-46bc-ace1-607c716a7ae1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.157553583s May 12 17:35:20.569: INFO: Pod "pod-33fe0a24-12dc-46bc-ace1-607c716a7ae1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.161998827s May 12 17:35:22.572: INFO: Pod "pod-33fe0a24-12dc-46bc-ace1-607c716a7ae1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.164945197s STEP: Saw pod success May 12 17:35:22.572: INFO: Pod "pod-33fe0a24-12dc-46bc-ace1-607c716a7ae1" satisfied condition "success or failure" May 12 17:35:22.574: INFO: Trying to get logs from node jerma-worker pod pod-33fe0a24-12dc-46bc-ace1-607c716a7ae1 container test-container: STEP: delete the pod May 12 17:35:22.615: INFO: Waiting for pod pod-33fe0a24-12dc-46bc-ace1-607c716a7ae1 to disappear May 12 17:35:22.622: INFO: Pod pod-33fe0a24-12dc-46bc-ace1-607c716a7ae1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:35:22.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7988" for this suite. • [SLOW TEST:6.696 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":214,"skipped":3528,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:35:22.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-6037 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet May 12 17:35:22.916: INFO: Found 0 stateful pods, waiting for 3 May 12 17:35:32.971: INFO: Found 2 stateful pods, waiting for 3 May 12 17:35:42.987: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 12 17:35:42.987: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 12 17:35:42.987: INFO: Waiting for 
pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 12 17:35:52.940: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 12 17:35:52.940: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 12 17:35:52.940: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 12 17:35:53.092: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 12 17:36:03.158: INFO: Updating stateful set ss2 May 12 17:36:04.198: INFO: Waiting for Pod statefulset-6037/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 12 17:36:14.542: INFO: Waiting for Pod statefulset-6037/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted May 12 17:36:25.351: INFO: Found 2 stateful pods, waiting for 3 May 12 17:36:35.355: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 12 17:36:35.355: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 12 17:36:35.355: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 12 17:36:35.375: INFO: Updating stateful set ss2 May 12 17:36:35.403: INFO: Waiting for Pod statefulset-6037/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 12 17:36:45.522: INFO: Waiting for Pod statefulset-6037/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 12 17:36:55.427: INFO: Updating stateful set ss2 May 12 17:36:55.515: INFO: Waiting for StatefulSet statefulset-6037/ss2 to complete update May 12 17:36:55.516: INFO: Waiting for Pod statefulset-6037/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 12 17:37:05.786: INFO: Waiting for StatefulSet statefulset-6037/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 12 17:37:16.142: INFO: Deleting all statefulset in ns statefulset-6037 May 12 17:37:16.773: INFO: Scaling statefulset ss2 to 0 May 12 17:37:47.441: INFO: Waiting for statefulset status.replicas updated to 0 May 12 17:37:47.443: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:37:47.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6037" for this suite. 
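------------------------------
The canary and phased behavior above comes from the StatefulSet's RollingUpdate partition: pods with an ordinal greater than or equal to the partition receive the new revision (ss2-84f9d6bf57 in the log, from the httpd 2.4.38 to 2.4.39 image bump), lower ordinals stay on the old one, and a partition above the replica count updates nothing at all, matching the "Not applying an update" step. Lowering the partition stepwise and finally to 0 completes the phased rollout. A Go sketch of the strategy field (the helper name is an assumption):

package sketch

import appsv1 "k8s.io/api/apps/v1"

// canaryStrategy returns an update strategy where, with three replicas
// and partition=2, only pod ordinal 2 (the canary, ss2-2 in the log)
// is moved to the new revision.
func canaryStrategy(partition int32) appsv1.StatefulSetUpdateStrategy {
	return appsv1.StatefulSetUpdateStrategy{
		Type: appsv1.RollingUpdateStatefulSetStrategyType,
		RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
			Partition: &partition,
		},
	}
}
------------------------------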
• [SLOW TEST:144.874 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":215,"skipped":3562,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:37:47.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with configMap that has name projected-configmap-test-upd-bd9efc6e-759c-4666-a35a-005414d036ce STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-bd9efc6e-759c-4666-a35a-005414d036ce STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:39:21.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6540" for this suite. 
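Note on the long "waiting to observe update in volume" step above: ConfigMap data projected into a volume is refreshed by the kubelet on its periodic sync, not at write time, which is why the test polls for roughly a minute and a half. A minimal sketch of triggering such an update (resource names here are hypothetical, the test uses generated ones):
  kubectl create configmap projected-cm --from-literal=data-1=value-1
  # ... a pod mounts projected-cm through a projected volume ...
  kubectl patch configmap projected-cm -p '{"data":{"data-1":"value-2"}}'
  # the file under the mount path changes only after the next kubelet sync, hence the polling above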
• [SLOW TEST:93.986 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":216,"skipped":3566,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:39:21.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-5819ba31-cc09-47a7-82a6-e340eaa0757a STEP: Creating a pod to test consume secrets May 12 17:39:21.900: INFO: Waiting up to 5m0s for pod "pod-secrets-e205a7bb-6eed-4092-8f6b-620dbdfc1844" in namespace "secrets-5452" to be "success or failure" May 12 17:39:22.091: INFO: Pod "pod-secrets-e205a7bb-6eed-4092-8f6b-620dbdfc1844": Phase="Pending", Reason="", readiness=false. Elapsed: 190.650212ms May 12 17:39:24.315: INFO: Pod "pod-secrets-e205a7bb-6eed-4092-8f6b-620dbdfc1844": Phase="Pending", Reason="", readiness=false. Elapsed: 2.415249148s May 12 17:39:26.762: INFO: Pod "pod-secrets-e205a7bb-6eed-4092-8f6b-620dbdfc1844": Phase="Pending", Reason="", readiness=false. Elapsed: 4.862138976s May 12 17:39:28.822: INFO: Pod "pod-secrets-e205a7bb-6eed-4092-8f6b-620dbdfc1844": Phase="Pending", Reason="", readiness=false. Elapsed: 6.922033724s May 12 17:39:30.876: INFO: Pod "pod-secrets-e205a7bb-6eed-4092-8f6b-620dbdfc1844": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.975894568s STEP: Saw pod success May 12 17:39:30.876: INFO: Pod "pod-secrets-e205a7bb-6eed-4092-8f6b-620dbdfc1844" satisfied condition "success or failure" May 12 17:39:30.878: INFO: Trying to get logs from node jerma-worker pod pod-secrets-e205a7bb-6eed-4092-8f6b-620dbdfc1844 container secret-volume-test: STEP: delete the pod May 12 17:39:30.948: INFO: Waiting for pod pod-secrets-e205a7bb-6eed-4092-8f6b-620dbdfc1844 to disappear May 12 17:39:31.134: INFO: Pod pod-secrets-e205a7bb-6eed-4092-8f6b-620dbdfc1844 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:39:31.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5452" for this suite. 
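Note on "defaultMode set" in the spec name above: it refers to spec.volumes[].secret.defaultMode, the permission bits applied to every file projected from the Secret. A minimal sketch of the same shape (pod name, image, and the 0400 mode are illustrative assumptions; the test uses its own generated names and a mount-test image):
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29   # assumed image; any shell image works
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test   # created e.g. via: kubectl create secret generic secret-test --from-literal=data-1=value-1
      defaultMode: 0400         # files come up as -r--------
EOF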
• [SLOW TEST:9.657 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":217,"skipped":3612,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:39:31.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC May 12 17:39:31.629: INFO: namespace kubectl-4163 May 12 17:39:31.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4163' May 12 17:39:32.218: INFO: stderr: "" May 12 17:39:32.218: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 12 17:39:33.222: INFO: Selector matched 1 pods for map[app:agnhost] May 12 17:39:33.222: INFO: Found 0 / 1 May 12 17:39:34.247: INFO: Selector matched 1 pods for map[app:agnhost] May 12 17:39:34.247: INFO: Found 0 / 1 May 12 17:39:35.221: INFO: Selector matched 1 pods for map[app:agnhost] May 12 17:39:35.221: INFO: Found 1 / 1 May 12 17:39:35.221: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 12 17:39:35.223: INFO: Selector matched 1 pods for map[app:agnhost] May 12 17:39:35.223: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 12 17:39:35.223: INFO: wait on agnhost-master startup in kubectl-4163 May 12 17:39:35.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-z9czq agnhost-master --namespace=kubectl-4163' May 12 17:39:35.328: INFO: stderr: "" May 12 17:39:35.328: INFO: stdout: "Paused\n" STEP: exposing RC May 12 17:39:35.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-4163' May 12 17:39:35.459: INFO: stderr: "" May 12 17:39:35.459: INFO: stdout: "service/rm2 exposed\n" May 12 17:39:35.499: INFO: Service rm2 in namespace kubectl-4163 found. 
STEP: exposing service May 12 17:39:37.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-4163' May 12 17:39:37.651: INFO: stderr: "" May 12 17:39:37.651: INFO: stdout: "service/rm3 exposed\n" May 12 17:39:37.668: INFO: Service rm3 in namespace kubectl-4163 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:39:39.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4163" for this suite. • [SLOW TEST:8.532 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1188 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":218,"skipped":3638,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:39:39.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 12 17:39:40.337: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 12 17:39:40.558: INFO: Number of nodes with available pods: 0 May 12 17:39:40.558: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
May 12 17:39:41.258: INFO: Number of nodes with available pods: 0 May 12 17:39:41.258: INFO: Node jerma-worker2 is running more than one daemon pod May 12 17:39:42.715: INFO: Number of nodes with available pods: 0 May 12 17:39:42.715: INFO: Node jerma-worker2 is running more than one daemon pod May 12 17:39:43.262: INFO: Number of nodes with available pods: 0 May 12 17:39:43.262: INFO: Node jerma-worker2 is running more than one daemon pod May 12 17:39:44.410: INFO: Number of nodes with available pods: 0 May 12 17:39:44.410: INFO: Node jerma-worker2 is running more than one daemon pod May 12 17:39:45.490: INFO: Number of nodes with available pods: 0 May 12 17:39:45.490: INFO: Node jerma-worker2 is running more than one daemon pod May 12 17:39:46.900: INFO: Number of nodes with available pods: 0 May 12 17:39:46.900: INFO: Node jerma-worker2 is running more than one daemon pod May 12 17:39:47.565: INFO: Number of nodes with available pods: 0 May 12 17:39:47.565: INFO: Node jerma-worker2 is running more than one daemon pod May 12 17:39:49.083: INFO: Number of nodes with available pods: 0 May 12 17:39:49.083: INFO: Node jerma-worker2 is running more than one daemon pod May 12 17:39:49.566: INFO: Number of nodes with available pods: 0 May 12 17:39:49.566: INFO: Node jerma-worker2 is running more than one daemon pod May 12 17:39:50.266: INFO: Number of nodes with available pods: 0 May 12 17:39:50.266: INFO: Node jerma-worker2 is running more than one daemon pod May 12 17:39:51.398: INFO: Number of nodes with available pods: 0 May 12 17:39:51.398: INFO: Node jerma-worker2 is running more than one daemon pod May 12 17:39:53.191: INFO: Number of nodes with available pods: 0 May 12 17:39:53.191: INFO: Node jerma-worker2 is running more than one daemon pod May 12 17:39:53.817: INFO: Number of nodes with available pods: 1 May 12 17:39:53.817: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 12 17:39:54.656: INFO: Number of nodes with available pods: 1 May 12 17:39:54.656: INFO: Number of running nodes: 0, number of available pods: 1 May 12 17:39:55.746: INFO: Number of nodes with available pods: 0 May 12 17:39:55.746: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 12 17:39:56.531: INFO: Number of nodes with available pods: 0 May 12 17:39:56.531: INFO: Node jerma-worker2 is running more than one daemon pod May 12 17:39:57.651: INFO: Number of nodes with available pods: 0 May 12 17:39:57.651: INFO: Node jerma-worker2 is running more than one daemon pod May 12 17:39:58.650: INFO: Number of nodes with available pods: 0 May 12 17:39:58.650: INFO: Node jerma-worker2 is running more than one daemon pod May 12 17:39:59.565: INFO: Number of nodes with available pods: 0 May 12 17:39:59.565: INFO: Node jerma-worker2 is running more than one daemon pod May 12 17:40:00.710: INFO: Number of nodes with available pods: 0 May 12 17:40:00.710: INFO: Node jerma-worker2 is running more than one daemon pod May 12 17:40:01.535: INFO: Number of nodes with available pods: 0 May 12 17:40:01.536: INFO: Node jerma-worker2 is running more than one daemon pod May 12 17:40:02.534: INFO: Number of nodes with available pods: 0 May 12 17:40:02.534: INFO: Node jerma-worker2 is running more than one daemon pod May 12 17:40:03.831: INFO: Number of nodes with available pods: 0 May 12 17:40:03.831: INFO: Node jerma-worker2 is running 
more than one daemon pod May 12 17:40:04.936: INFO: Number of nodes with available pods: 0 May 12 17:40:04.936: INFO: Node jerma-worker2 is running more than one daemon pod May 12 17:40:05.682: INFO: Number of nodes with available pods: 0 May 12 17:40:05.682: INFO: Node jerma-worker2 is running more than one daemon pod May 12 17:40:06.535: INFO: Number of nodes with available pods: 0 May 12 17:40:06.535: INFO: Node jerma-worker2 is running more than one daemon pod May 12 17:40:07.534: INFO: Number of nodes with available pods: 0 May 12 17:40:07.534: INFO: Node jerma-worker2 is running more than one daemon pod May 12 17:40:08.625: INFO: Number of nodes with available pods: 0 May 12 17:40:08.625: INFO: Node jerma-worker2 is running more than one daemon pod May 12 17:40:09.878: INFO: Number of nodes with available pods: 0 May 12 17:40:09.878: INFO: Node jerma-worker2 is running more than one daemon pod May 12 17:40:10.793: INFO: Number of nodes with available pods: 0 May 12 17:40:10.793: INFO: Node jerma-worker2 is running more than one daemon pod May 12 17:40:11.534: INFO: Number of nodes with available pods: 0 May 12 17:40:11.534: INFO: Node jerma-worker2 is running more than one daemon pod May 12 17:40:12.713: INFO: Number of nodes with available pods: 0 May 12 17:40:12.713: INFO: Node jerma-worker2 is running more than one daemon pod May 12 17:40:13.534: INFO: Number of nodes with available pods: 0 May 12 17:40:13.534: INFO: Node jerma-worker2 is running more than one daemon pod May 12 17:40:14.534: INFO: Number of nodes with available pods: 0 May 12 17:40:14.534: INFO: Node jerma-worker2 is running more than one daemon pod May 12 17:40:15.567: INFO: Number of nodes with available pods: 1 May 12 17:40:15.567: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8526, will wait for the garbage collector to delete the pods May 12 17:40:15.780: INFO: Deleting DaemonSet.extensions daemon-set took: 6.261345ms May 12 17:40:16.080: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.238381ms May 12 17:40:19.904: INFO: Number of nodes with available pods: 0 May 12 17:40:19.904: INFO: Number of running nodes: 0, number of available pods: 0 May 12 17:40:19.906: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8526/daemonsets","resourceVersion":"15631709"},"items":null} May 12 17:40:19.907: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8526/pods","resourceVersion":"15631709"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:40:19.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8526" for this suite. 
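Note on the sequence above: it is entirely label-driven. The DaemonSet's pod template carries a nodeSelector, so flipping a node label schedules or evicts the daemon pod, and retargeting the selector plus switching to RollingUpdate is an ordinary template update. A minimal sketch (blue and green mirror the STEP text; the exact label key the test generates is not shown in the log, so color is an assumption):
  kubectl label node jerma-worker2 color=blue                 # selector matches; daemon pod is scheduled onto the node
  kubectl label node jerma-worker2 color=green --overwrite    # selector no longer matches; the pod is removed
  # retarget the DaemonSet at green nodes and change its update strategy, as the test does:
  kubectl -n daemonsets-8526 patch daemonset daemon-set -p '{"spec":{"updateStrategy":{"type":"RollingUpdate"},"template":{"spec":{"nodeSelector":{"color":"green"}}}}}'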
• [SLOW TEST:40.260 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":219,"skipped":3645,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:40:19.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1754 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 12 17:40:20.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-686' May 12 17:40:20.253: INFO: stderr: "" May 12 17:40:20.253: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1759 May 12 17:40:20.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-686' May 12 17:40:29.308: INFO: stderr: "" May 12 17:40:29.308: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:40:29.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-686" for this suite. 
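Note on the test above: with the run-pod/v1 generator and --restart=Never, kubectl run creates a bare Pod with restartPolicy Never rather than a workload controller, which is why the cleanup deletes a pod and not a deployment. A quick way to confirm what was created (this check is an assumption for illustration, not taken from the log):
  kubectl -n kubectl-686 get pod e2e-test-httpd-pod -o jsonpath='{.spec.restartPolicy}'   # prints: Never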
• [SLOW TEST:9.585 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1750 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":220,"skipped":3649,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:40:29.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 12 17:40:30.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6954' May 12 17:40:30.708: INFO: stderr: "" May 12 17:40:30.708: INFO: stdout: "replicationcontroller/agnhost-master created\n" May 12 17:40:30.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6954' May 12 17:40:31.149: INFO: stderr: "" May 12 17:40:31.149: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 12 17:40:32.152: INFO: Selector matched 1 pods for map[app:agnhost] May 12 17:40:32.152: INFO: Found 0 / 1 May 12 17:40:33.172: INFO: Selector matched 1 pods for map[app:agnhost] May 12 17:40:33.173: INFO: Found 0 / 1 May 12 17:40:34.153: INFO: Selector matched 1 pods for map[app:agnhost] May 12 17:40:34.153: INFO: Found 0 / 1 May 12 17:40:35.153: INFO: Selector matched 1 pods for map[app:agnhost] May 12 17:40:35.153: INFO: Found 1 / 1 May 12 17:40:35.153: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 12 17:40:35.156: INFO: Selector matched 1 pods for map[app:agnhost] May 12 17:40:35.156: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
May 12 17:40:35.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-5xzfj --namespace=kubectl-6954' May 12 17:40:35.284: INFO: stderr: "" May 12 17:40:35.284: INFO: stdout: "Name: agnhost-master-5xzfj\nNamespace: kubectl-6954\nPriority: 0\nNode: jerma-worker/172.17.0.10\nStart Time: Tue, 12 May 2020 17:40:30 +0000\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nStatus: Running\nIP: 10.244.1.204\nIPs:\n IP: 10.244.1.204\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://92fb8fee0b3a33846fc25cb239117e1be67e68f889792621b2e87f030c8d800e\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Tue, 12 May 2020 17:40:34 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-fwhbl (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-fwhbl:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-fwhbl\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 5s default-scheduler Successfully assigned kubectl-6954/agnhost-master-5xzfj to jerma-worker\n Normal Pulled 3s kubelet, jerma-worker Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 2s kubelet, jerma-worker Created container agnhost-master\n Normal Started 1s kubelet, jerma-worker Started container agnhost-master\n" May 12 17:40:35.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-6954' May 12 17:40:35.424: INFO: stderr: "" May 12 17:40:35.424: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-6954\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 5s replication-controller Created pod: agnhost-master-5xzfj\n" May 12 17:40:35.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-6954' May 12 17:40:35.538: INFO: stderr: "" May 12 17:40:35.538: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-6954\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.110.85.215\nPort: <unset> 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.204:6379\nSession Affinity: None\nEvents: <none>\n" May 12 17:40:35.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane' May 12 17:40:35.667: INFO: stderr: "" May 12 17:40:35.667: INFO: stdout: "Name: jerma-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n 
beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:25:55 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: jerma-control-plane\n AcquireTime: <unset>\n RenewTime: Tue, 12 May 2020 17:40:34 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Tue, 12 May 2020 17:39:39 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Tue, 12 May 2020 17:39:39 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Tue, 12 May 2020 17:39:39 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Tue, 12 May 2020 17:39:39 +0000 Sun, 15 Mar 2020 18:26:27 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.9\n Hostname: jerma-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3bcfb16fe77247d3af07bed975350d5c\n System UUID: 947a2db5-5527-4203-8af5-13d97ffe8a80\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2-31-gaa877d78\n Kubelet Version: v1.17.2\n Kube-Proxy Version: v1.17.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-rll5s 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 57d\n kube-system coredns-6955765f44-svxk5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 57d\n kube-system etcd-jerma-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 57d\n kube-system kindnet-bjddj 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 57d\n kube-system kube-apiserver-jerma-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 57d\n kube-system kube-controller-manager-jerma-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 57d\n kube-system kube-proxy-mm9zd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 57d\n kube-system kube-scheduler-jerma-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 57d\n local-path-storage local-path-provisioner-85445b74d4-7mg5w 0 (0%) 0 (0%) 0 (0%) 0 (0%) 57d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: <none>\n" May 12 17:40:35.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-6954' May 12 17:40:35.761: INFO: stderr: "" May 12 17:40:35.761: INFO: stdout: "Name: kubectl-6954\nLabels: e2e-framework=kubectl\n e2e-run=9c81ead3-a3ef-410f-8702-87048a93e1d6\nAnnotations: <none>\nStatus: Active\n\nNo resource quota.\n\nNo 
LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:40:35.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6954" for this suite. • [SLOW TEST:6.242 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1047 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":221,"skipped":3666,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SS ------------------------------ [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:40:35.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1489 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 12 17:40:35.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-4407' May 12 17:40:35.994: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 12 17:40:35.994: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created [AfterEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1495 May 12 17:40:38.042: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-4407' May 12 17:40:38.251: INFO: stderr: "" May 12 17:40:38.251: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:40:38.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4407" for this suite. 
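Note on the stderr above: it is the v1.17 generator deprecation warning. kubectl run without --restart falls back to --generator=deployment/apps.v1 and creates a Deployment, which is why the verification step looks for a pod controlled by e2e-test-httpd-deployment. The replacements the warning points to would look like this (a sketch following the deprecation message, same image as the test):
  kubectl create deployment e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine
  kubectl run e2e-test-httpd-pod --generator=run-pod/v1 --restart=Never --image=docker.io/library/httpd:2.4.38-alpine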
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":222,"skipped":3668,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:40:38.259: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 12 17:40:38.660: INFO: Creating deployment "webserver-deployment" May 12 17:40:38.665: INFO: Waiting for observed generation 1 May 12 17:40:40.872: INFO: Waiting for all required pods to come up May 12 17:40:40.878: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running May 12 17:40:58.922: INFO: Waiting for deployment "webserver-deployment" to complete May 12 17:40:58.928: INFO: Updating deployment "webserver-deployment" with a non-existent image May 12 17:40:58.934: INFO: Updating deployment webserver-deployment May 12 17:40:58.934: INFO: Waiting for observed generation 2 May 12 17:41:01.124: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 May 12 17:41:01.127: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 May 12 17:41:01.131: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 12 17:41:01.390: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 May 12 17:41:01.390: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 May 12 17:41:01.716: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 12 17:41:01.723: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas May 12 17:41:01.723: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 May 12 17:41:01.729: INFO: Updating deployment webserver-deployment May 12 17:41:01.729: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas May 12 17:41:01.995: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 May 12 17:41:02.530: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 12 17:41:03.100: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-9888 /apis/apps/v1/namespaces/deployment-9888/deployments/webserver-deployment 236bb830-675b-4653-90af-7e6d269b01b4 15632139 3 2020-05-12 17:40:38 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] 
[]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005c44658 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-05-12 17:41:00 +0000 UTC,LastTransitionTime:2020-05-12 17:40:38 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-12 17:41:01 +0000 UTC,LastTransitionTime:2020-05-12 17:41:01 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} May 12 17:41:03.278: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-9888 /apis/apps/v1/namespaces/deployment-9888/replicasets/webserver-deployment-c7997dcc8 56b045a2-8f31-4abf-9beb-fd4c47f2cabd 15632162 3 2020-05-12 17:40:58 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 236bb830-675b-4653-90af-7e6d269b01b4 0xc005c44b37 0xc005c44b38}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005c44ba8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 12 17:41:03.278: INFO: All old ReplicaSets of Deployment "webserver-deployment": May 12 17:41:03.278: INFO: 
&ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-9888 /apis/apps/v1/namespaces/deployment-9888/replicasets/webserver-deployment-595b5b9587 e300aeec-aa63-407c-b029-809c05b4271a 15632159 3 2020-05-12 17:40:38 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 236bb830-675b-4653-90af-7e6d269b01b4 0xc005c44a77 0xc005c44a78}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005c44ad8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} May 12 17:41:03.377: INFO: Pod "webserver-deployment-595b5b9587-2q6hd" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-2q6hd webserver-deployment-595b5b9587- deployment-9888 /api/v1/namespaces/deployment-9888/pods/webserver-deployment-595b5b9587-2q6hd 01a275a3-2d17-4f26-bcb7-d875acc98c8f 15631945 0 2020-05-12 17:40:38 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e300aeec-aa63-407c-b029-809c05b4271a 0xc003b16167 0xc003b16168}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-45snq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-45snq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-45snq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:40:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:40:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:40:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:40:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.206,StartTime:2020-05-12 17:40:39 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-12 17:40:43 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f812de1f1f316acd27b9b3d87cbddc9f82102bf9f4430178ffbe6114b7ad6e81,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.206,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 17:41:03.377: INFO: Pod "webserver-deployment-595b5b9587-5kcjx" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-5kcjx webserver-deployment-595b5b9587- deployment-9888 /api/v1/namespaces/deployment-9888/pods/webserver-deployment-595b5b9587-5kcjx f3335244-1a72-4600-9e76-5b0cb5f6a2ae 15632114 0 2020-05-12 17:41:01 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e300aeec-aa63-407c-b029-809c05b4271a 0xc003b162e7 0xc003b162e8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-45snq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-45snq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-45snq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Val
ue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:41:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 17:41:03.377: INFO: Pod "webserver-deployment-595b5b9587-6ljz2" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-6ljz2 webserver-deployment-595b5b9587- deployment-9888 /api/v1/namespaces/deployment-9888/pods/webserver-deployment-595b5b9587-6ljz2 e52303ed-9e00-4ab0-b9f4-5527602bbae7 15632156 0 2020-05-12 17:41:01 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e300aeec-aa63-407c-b029-809c05b4271a 0xc003b16407 0xc003b16408}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-45snq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-45snq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-45snq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExec
ute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:41:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:41:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:41:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:41:01 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-12 17:41:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 17:41:03.378: INFO: Pod "webserver-deployment-595b5b9587-84frx" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-84frx webserver-deployment-595b5b9587- deployment-9888 /api/v1/namespaces/deployment-9888/pods/webserver-deployment-595b5b9587-84frx c2c382f6-442b-4bde-bb29-c3e11ce4e0d4 15632145 0 2020-05-12 17:41:02 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e300aeec-aa63-407c-b029-809c05b4271a 0xc003b16577 0xc003b16578}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-45snq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-45snq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-45snq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:41:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 17:41:03.378: INFO: Pod "webserver-deployment-595b5b9587-8q2tj" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-8q2tj webserver-deployment-595b5b9587- deployment-9888 /api/v1/namespaces/deployment-9888/pods/webserver-deployment-595b5b9587-8q2tj 6420c09b-10e2-4d91-ad4d-718c8eaab34a 15632147 0 2020-05-12 17:41:02 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 e300aeec-aa63-407c-b029-809c05b4271a 0xc003b16697 0xc003b16698}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-45snq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-45snq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-45snq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:41:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 17:41:03.378: INFO: Pod "webserver-deployment-595b5b9587-9jh2h" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9jh2h webserver-deployment-595b5b9587- deployment-9888 /api/v1/namespaces/deployment-9888/pods/webserver-deployment-595b5b9587-9jh2h 2e51c65e-e103-4d56-9872-0021c51350b0 15631977 0 2020-05-12 
17:40:38 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e300aeec-aa63-407c-b029-809c05b4271a 0xc003b167b7 0xc003b167b8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-45snq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-45snq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-45snq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:40:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:40:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:40:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:40:38 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.76,StartTime:2020-05-12 17:40:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-12 17:40:49 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://bbd1bc24778e6c785271e1bb0f046958b2a39dcdb16e0d7b48e8e5e3b05bca60,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.76,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 17:41:03.378: INFO: Pod "webserver-deployment-595b5b9587-cmlcz" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-cmlcz webserver-deployment-595b5b9587- deployment-9888 /api/v1/namespaces/deployment-9888/pods/webserver-deployment-595b5b9587-cmlcz 7e0c5aab-0bb4-4342-a46b-a67f8712e04e 15632144 0 2020-05-12 17:41:02 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e300aeec-aa63-407c-b029-809c05b4271a 0xc003b16937 0xc003b16938}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-45snq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-45snq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-45snq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,
Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:41:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 17:41:03.378: INFO: Pod "webserver-deployment-595b5b9587-dzj8x" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-dzj8x webserver-deployment-595b5b9587- deployment-9888 /api/v1/namespaces/deployment-9888/pods/webserver-deployment-595b5b9587-dzj8x f98553e8-dac0-4f9b-b7aa-5b85f20e9ba2 15632136 0 2020-05-12 17:41:02 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e300aeec-aa63-407c-b029-809c05b4271a 0xc003b16a57 0xc003b16a58}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-45snq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-45snq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-45snq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountT
oken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:41:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 17:41:03.379: INFO: Pod "webserver-deployment-595b5b9587-fq7hz" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-fq7hz webserver-deployment-595b5b9587- deployment-9888 /api/v1/namespaces/deployment-9888/pods/webserver-deployment-595b5b9587-fq7hz 7a182919-4a76-4391-ab80-9089046d3e92 15632020 0 2020-05-12 17:40:38 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e300aeec-aa63-407c-b029-809c05b4271a 0xc003b16b77 0xc003b16b78}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-45snq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-45snq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-45snq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinit
y:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:40:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:40:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:40:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:40:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.78,StartTime:2020-05-12 17:40:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-12 17:40:52 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://d5953c1629eef1e64bdf72306a9c36d511db6920fe2b89188b04762a75a1961f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.78,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 17:41:03.379: INFO: Pod "webserver-deployment-595b5b9587-g79zq" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-g79zq webserver-deployment-595b5b9587- deployment-9888 /api/v1/namespaces/deployment-9888/pods/webserver-deployment-595b5b9587-g79zq 218da2da-b09e-4e66-b3ab-85e2f6f5e3f1 15632028 0 2020-05-12 17:40:39 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e300aeec-aa63-407c-b029-809c05b4271a 0xc003b16cf7 0xc003b16cf8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-45snq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-45snq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-45snq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:40:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:40:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:40:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:40:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.80,StartTime:2020-05-12 17:40:39 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-12 17:40:55 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f96f5fadcc74cbed6c0d820cf21f9303473b05dff6e1b8c6a011ca4665a95490,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.80,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 17:41:03.379: INFO: Pod "webserver-deployment-595b5b9587-glppp" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-glppp webserver-deployment-595b5b9587- deployment-9888 /api/v1/namespaces/deployment-9888/pods/webserver-deployment-595b5b9587-glppp 717c88f1-2a45-4426-842b-05ddba33b8a4 15632143 0 2020-05-12 17:41:02 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e300aeec-aa63-407c-b029-809c05b4271a 0xc003b16e77 0xc003b16e78}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-45snq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-45snq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-45snq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Valu
e:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:41:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 17:41:03.379: INFO: Pod "webserver-deployment-595b5b9587-mgpnx" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-mgpnx webserver-deployment-595b5b9587- deployment-9888 /api/v1/namespaces/deployment-9888/pods/webserver-deployment-595b5b9587-mgpnx 82f71ace-fcba-4e62-a196-fddc053ffb07 15632116 0 2020-05-12 17:41:01 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e300aeec-aa63-407c-b029-809c05b4271a 0xc003b16f97 0xc003b16f98}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-45snq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-45snq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-45snq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExec
ute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:41:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 17:41:03.379: INFO: Pod "webserver-deployment-595b5b9587-mpxrz" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-mpxrz webserver-deployment-595b5b9587- deployment-9888 /api/v1/namespaces/deployment-9888/pods/webserver-deployment-595b5b9587-mpxrz 621dce6b-b162-4b15-a812-f3510f0bdfbc 15632012 0 2020-05-12 17:40:38 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e300aeec-aa63-407c-b029-809c05b4271a 0xc003b170b7 0xc003b170b8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-45snq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-45snq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-45snq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]
Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:40:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:40:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:40:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:40:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.79,StartTime:2020-05-12 17:40:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-12 17:40:54 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f7880a646259ff31efdc6a624a2c1ef603a9b720980a6207b91959c32654053c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.79,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 17:41:03.380: INFO: Pod "webserver-deployment-595b5b9587-nwmdw" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-nwmdw webserver-deployment-595b5b9587- deployment-9888 /api/v1/namespaces/deployment-9888/pods/webserver-deployment-595b5b9587-nwmdw 27b8373d-8835-42cc-a70a-172a387b46c8 15632146 0 2020-05-12 17:41:02 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e300aeec-aa63-407c-b029-809c05b4271a 0xc003b17237 0xc003b17238}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-45snq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-45snq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-45snq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:41:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 17:41:03.380: INFO: Pod "webserver-deployment-595b5b9587-r76d4" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-r76d4 webserver-deployment-595b5b9587- deployment-9888 /api/v1/namespaces/deployment-9888/pods/webserver-deployment-595b5b9587-r76d4 43483907-59b2-4cff-bad3-8729a1129af3 15632009 0 2020-05-12 17:40:38 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 e300aeec-aa63-407c-b029-809c05b4271a 0xc003b17357 0xc003b17358}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-45snq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-45snq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-45snq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:40:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:40:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:40:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:40:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.208,StartTime:2020-05-12 17:40:39 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-12 17:40:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://5c3860e03716dad0f66bccdd126e338c7ee6e5e5cde6b3ce3ff07c4bb20f3fc9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.208,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 17:41:03.380: INFO: Pod "webserver-deployment-595b5b9587-sfxk2" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-sfxk2 webserver-deployment-595b5b9587- deployment-9888 /api/v1/namespaces/deployment-9888/pods/webserver-deployment-595b5b9587-sfxk2 58cf6458-ca86-45f9-a1dc-8e7a460329b1 15631986 0 2020-05-12 17:40:38 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e300aeec-aa63-407c-b029-809c05b4271a 0xc003b174d7 0xc003b174d8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-45snq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-45snq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-45snq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:
,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:40:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:40:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:40:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:40:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.77,StartTime:2020-05-12 17:40:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-12 17:40:49 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://5de731dbf3ddddc9648f75688b93f884f7fd19d2a88b1ad3db74e41e4c172e32,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.77,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 17:41:03.380: INFO: Pod "webserver-deployment-595b5b9587-vnlxf" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-vnlxf webserver-deployment-595b5b9587- deployment-9888 /api/v1/namespaces/deployment-9888/pods/webserver-deployment-595b5b9587-vnlxf a063b4b4-8929-42ee-afc1-e720401d8ad2 15632125 0 2020-05-12 17:41:02 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e300aeec-aa63-407c-b029-809c05b4271a 0xc003b17657 0xc003b17658}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-45snq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-45snq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-45snq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:41:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 17:41:03.380: INFO: Pod "webserver-deployment-595b5b9587-wjxhd" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-wjxhd webserver-deployment-595b5b9587- deployment-9888 /api/v1/namespaces/deployment-9888/pods/webserver-deployment-595b5b9587-wjxhd 02d487b0-8bd4-46a1-99f6-2b791dfed425 15632133 0 2020-05-12 17:41:02 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 e300aeec-aa63-407c-b029-809c05b4271a 0xc003b17777 0xc003b17778}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-45snq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-45snq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-45snq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:41:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 17:41:03.380: INFO: Pod "webserver-deployment-595b5b9587-zqbvg" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-zqbvg webserver-deployment-595b5b9587- deployment-9888 /api/v1/namespaces/deployment-9888/pods/webserver-deployment-595b5b9587-zqbvg b0d98d3c-e5c2-4130-8632-f5735ab2efba 15632025 0 2020-05-12 
17:40:38 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e300aeec-aa63-407c-b029-809c05b4271a 0xc003b17897 0xc003b17898}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-45snq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-45snq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-45snq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:40:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:40:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:40:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:40:39 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.207,StartTime:2020-05-12 17:40:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-12 17:40:52 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://8a7505e93c644c7ac6e29aae9e0c10b062039db370c06c3e6d3499d05dfea224,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.207,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 17:41:03.380: INFO: Pod "webserver-deployment-595b5b9587-zttzf" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-zttzf webserver-deployment-595b5b9587- deployment-9888 /api/v1/namespaces/deployment-9888/pods/webserver-deployment-595b5b9587-zttzf a71df324-2536-426f-a7c9-38f6962e74a8 15632138 0 2020-05-12 17:41:02 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 e300aeec-aa63-407c-b029-809c05b4271a 0xc003b17a17 0xc003b17a18}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-45snq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-45snq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-45snq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exis
ts,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:41:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 17:41:03.381: INFO: Pod "webserver-deployment-c7997dcc8-4gr8x" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-4gr8x webserver-deployment-c7997dcc8- deployment-9888 /api/v1/namespaces/deployment-9888/pods/webserver-deployment-c7997dcc8-4gr8x f66253ff-7372-4cf0-b27b-b482bd58e6cc 15632170 0 2020-05-12 17:40:59 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 56b045a2-8f31-4abf-9beb-fd4c47f2cabd 0xc003b17b37 0xc003b17b38}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-45snq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-45snq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-45snq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Tole
ration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:40:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:40:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:40:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:40:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.211,StartTime:2020-05-12 17:40:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.211,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 17:41:03.381: INFO: Pod "webserver-deployment-c7997dcc8-7d52j" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-7d52j webserver-deployment-c7997dcc8- deployment-9888 /api/v1/namespaces/deployment-9888/pods/webserver-deployment-c7997dcc8-7d52j fc3fa6a5-b1c0-44ec-9fa3-6365ccd85697 15632095 0 2020-05-12 17:40:59 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 56b045a2-8f31-4abf-9beb-fd4c47f2cabd 0xc003b17ce7 0xc003b17ce8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-45snq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-45snq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-45snq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:41:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:41:00 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:41:00 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:40:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-12 17:41:00 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 17:41:03.381: INFO: Pod "webserver-deployment-c7997dcc8-7mc5s" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-7mc5s webserver-deployment-c7997dcc8- deployment-9888 /api/v1/namespaces/deployment-9888/pods/webserver-deployment-c7997dcc8-7mc5s ca6c8859-601d-4dd1-8ddb-a0960e936cd9 15632158 0 2020-05-12 17:41:02 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 56b045a2-8f31-4abf-9beb-fd4c47f2cabd 0xc003b17e67 0xc003b17e68}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-45snq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-45snq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-45snq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overh
ead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:41:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 17:41:03.381: INFO: Pod "webserver-deployment-c7997dcc8-gj7sc" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-gj7sc webserver-deployment-c7997dcc8- deployment-9888 /api/v1/namespaces/deployment-9888/pods/webserver-deployment-c7997dcc8-gj7sc 27f6e7b7-542f-4771-a43f-ddfbaaf7915c 15632149 0 2020-05-12 17:41:02 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 56b045a2-8f31-4abf-9beb-fd4c47f2cabd 0xc003b17f97 0xc003b17f98}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-45snq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-45snq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-45snq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Runtime
ClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:41:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 17:41:03.381: INFO: Pod "webserver-deployment-c7997dcc8-grbp7" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-grbp7 webserver-deployment-c7997dcc8- deployment-9888 /api/v1/namespaces/deployment-9888/pods/webserver-deployment-c7997dcc8-grbp7 ff4e9211-5f83-4f52-a4ba-91ef79c19e39 15632066 0 2020-05-12 17:40:59 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 56b045a2-8f31-4abf-9beb-fd4c47f2cabd 0xc000a48737 0xc000a48738}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-45snq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-45snq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-45snq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,Shar
eProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:40:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:40:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:40:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:40:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-12 17:40:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 17:41:03.382: INFO: Pod "webserver-deployment-c7997dcc8-hqh9l" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-hqh9l webserver-deployment-c7997dcc8- deployment-9888 /api/v1/namespaces/deployment-9888/pods/webserver-deployment-c7997dcc8-hqh9l 5e9306bf-2a70-4ad8-80da-7ffdf419f716 15632079 0 2020-05-12 17:40:59 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 56b045a2-8f31-4abf-9beb-fd4c47f2cabd 0xc000a488c7 0xc000a488c8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-45snq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-45snq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-45snq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:40:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:40:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:40:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:40:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-12 17:40:59 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 17:41:03.382: INFO: Pod "webserver-deployment-c7997dcc8-lqtnf" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-lqtnf webserver-deployment-c7997dcc8- deployment-9888 /api/v1/namespaces/deployment-9888/pods/webserver-deployment-c7997dcc8-lqtnf 5bc06278-9c5c-42ed-ae67-5c35cf2a5a15 15632148 0 2020-05-12 17:41:02 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 56b045a2-8f31-4abf-9beb-fd4c47f2cabd 0xc000a48a57 0xc000a48a58}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-45snq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-45snq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-45snq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhe
ad:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:41:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 17:41:03.382: INFO: Pod "webserver-deployment-c7997dcc8-lrhz2" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-lrhz2 webserver-deployment-c7997dcc8- deployment-9888 /api/v1/namespaces/deployment-9888/pods/webserver-deployment-c7997dcc8-lrhz2 afef1ec6-0b98-4b29-b8cb-bdc8078538f4 15632150 0 2020-05-12 17:41:02 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 56b045a2-8f31-4abf-9beb-fd4c47f2cabd 0xc000a48b87 0xc000a48b88}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-45snq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-45snq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-45snq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeC
lassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:41:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 17:41:03.382: INFO: Pod "webserver-deployment-c7997dcc8-qt925" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-qt925 webserver-deployment-c7997dcc8- deployment-9888 /api/v1/namespaces/deployment-9888/pods/webserver-deployment-c7997dcc8-qt925 1d4ba924-4f74-4371-9944-a8785c4f6a24 15632165 0 2020-05-12 17:41:01 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 56b045a2-8f31-4abf-9beb-fd4c47f2cabd 0xc000a48cb7 0xc000a48cb8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-45snq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-45snq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-45snq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,Share
ProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:41:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:41:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:41:02 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:41:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-12 17:41:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 17:41:03.382: INFO: Pod "webserver-deployment-c7997dcc8-rkx9j" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-rkx9j webserver-deployment-c7997dcc8- deployment-9888 /api/v1/namespaces/deployment-9888/pods/webserver-deployment-c7997dcc8-rkx9j 3515e032-2c5e-4aff-aff7-2137344c5c05 15632137 0 2020-05-12 17:41:02 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 56b045a2-8f31-4abf-9beb-fd4c47f2cabd 0xc000a48e37 0xc000a48e38}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-45snq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-45snq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-45snq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:41:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 17:41:03.382: INFO: Pod "webserver-deployment-c7997dcc8-vbjll" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-vbjll webserver-deployment-c7997dcc8- deployment-9888 /api/v1/namespaces/deployment-9888/pods/webserver-deployment-c7997dcc8-vbjll 197c29e5-add5-4f9c-90b5-137f2bb9f345 15632151 0 2020-05-12 17:41:02 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 
56b045a2-8f31-4abf-9beb-fd4c47f2cabd 0xc000a48f67 0xc000a48f68}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-45snq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-45snq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-45snq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:41:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 17:41:03.383: INFO: Pod "webserver-deployment-c7997dcc8-xlxlj" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-xlxlj webserver-deployment-c7997dcc8- deployment-9888 /api/v1/namespaces/deployment-9888/pods/webserver-deployment-c7997dcc8-xlxlj 541a8522-4482-4c85-bb6f-7005a0c65a00 15632100 0 2020-05-12 17:40:59 +0000 UTC map[name:httpd 
pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 56b045a2-8f31-4abf-9beb-fd4c47f2cabd 0xc000a490a7 0xc000a490a8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-45snq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-45snq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-45snq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:41:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:41:00 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:41:00 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-05-12 17:40:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-12 17:41:00 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 12 17:41:03.383: INFO: Pod "webserver-deployment-c7997dcc8-zp22g" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-zp22g webserver-deployment-c7997dcc8- deployment-9888 /api/v1/namespaces/deployment-9888/pods/webserver-deployment-c7997dcc8-zp22g a08380c9-b22b-4692-a552-9de251adfba8 15632132 0 2020-05-12 17:41:02 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 56b045a2-8f31-4abf-9beb-fd4c47f2cabd 0xc000a49237 0xc000a49238}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-45snq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-45snq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-45snq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority
:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:41:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:41:03.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9888" for this suite. • [SLOW TEST:25.389 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":223,"skipped":3687,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:41:03.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller May 12 17:41:04.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9287' May 12 17:41:05.351: INFO: stderr: "" May 12 17:41:05.351: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
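The poll that follows reads pod names with a Go template and retries until the replication controller has brought both replicas up. A standalone equivalent of that poll, with the namespace, label, and template taken from this run (the kubeconfig path is whatever your environment uses):

# Print the names of all update-demo pods, space-separated; an empty
# result means the replication controller has not created them yet.
kubectl --kubeconfig=/root/.kube/config get pods \
  -l name=update-demo --namespace=kubectl-9287 \
  -o template --template='{{range .items}}{{.metadata.name}} {{end}}'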
May 12 17:41:05.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9287' May 12 17:41:05.656: INFO: stderr: "" May 12 17:41:05.656: INFO: stdout: "" STEP: Replicas for name=update-demo: expected=2 actual=0 May 12 17:41:10.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9287' May 12 17:41:10.845: INFO: stderr: "" May 12 17:41:10.845: INFO: stdout: "update-demo-nautilus-6kks9 update-demo-nautilus-ld48s " May 12 17:41:10.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6kks9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9287' May 12 17:41:11.160: INFO: stderr: "" May 12 17:41:11.160: INFO: stdout: "" May 12 17:41:11.160: INFO: update-demo-nautilus-6kks9 is created but not running May 12 17:41:16.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9287' May 12 17:41:16.276: INFO: stderr: "" May 12 17:41:16.276: INFO: stdout: "update-demo-nautilus-6kks9 update-demo-nautilus-ld48s " May 12 17:41:16.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6kks9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9287' May 12 17:41:17.386: INFO: stderr: "" May 12 17:41:17.386: INFO: stdout: "" May 12 17:41:17.386: INFO: update-demo-nautilus-6kks9 is created but not running May 12 17:41:22.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9287' May 12 17:41:22.915: INFO: stderr: "" May 12 17:41:22.915: INFO: stdout: "update-demo-nautilus-6kks9 update-demo-nautilus-ld48s " May 12 17:41:22.915: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6kks9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9287' May 12 17:41:23.194: INFO: stderr: "" May 12 17:41:23.194: INFO: stdout: "" May 12 17:41:23.194: INFO: update-demo-nautilus-6kks9 is created but not running May 12 17:41:28.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9287' May 12 17:41:29.210: INFO: stderr: "" May 12 17:41:29.210: INFO: stdout: "update-demo-nautilus-6kks9 update-demo-nautilus-ld48s " May 12 17:41:29.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6kks9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9287' May 12 17:41:29.905: INFO: stderr: "" May 12 17:41:29.905: INFO: stdout: "" May 12 17:41:29.905: INFO: update-demo-nautilus-6kks9 is created but not running May 12 17:41:34.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9287' May 12 17:41:35.203: INFO: stderr: "" May 12 17:41:35.203: INFO: stdout: "update-demo-nautilus-6kks9 update-demo-nautilus-ld48s " May 12 17:41:35.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6kks9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9287' May 12 17:41:35.906: INFO: stderr: "" May 12 17:41:35.907: INFO: stdout: "true" May 12 17:41:35.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6kks9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9287' May 12 17:41:36.231: INFO: stderr: "" May 12 17:41:36.231: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 12 17:41:36.231: INFO: validating pod update-demo-nautilus-6kks9 May 12 17:41:36.694: INFO: got data: { "image": "nautilus.jpg" } May 12 17:41:36.694: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 12 17:41:36.694: INFO: update-demo-nautilus-6kks9 is verified up and running May 12 17:41:36.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ld48s -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9287' May 12 17:41:36.992: INFO: stderr: "" May 12 17:41:36.992: INFO: stdout: "true" May 12 17:41:36.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ld48s -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9287' May 12 17:41:37.544: INFO: stderr: "" May 12 17:41:37.544: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 12 17:41:37.544: INFO: validating pod update-demo-nautilus-ld48s May 12 17:41:37.787: INFO: got data: { "image": "nautilus.jpg" } May 12 17:41:37.787: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 12 17:41:37.787: INFO: update-demo-nautilus-ld48s is verified up and running STEP: using delete to clean up resources May 12 17:41:37.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9287' May 12 17:41:38.206: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 12 17:41:38.206: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 12 17:41:38.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9287' May 12 17:41:38.511: INFO: stderr: "No resources found in kubectl-9287 namespace.\n" May 12 17:41:38.511: INFO: stdout: "" May 12 17:41:38.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9287 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 12 17:41:38.951: INFO: stderr: "" May 12 17:41:38.951: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:41:38.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9287" for this suite. • [SLOW TEST:35.809 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":224,"skipped":3756,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:41:39.458: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 12 17:41:43.724: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 12 17:41:46.194: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902103, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902103, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63724902104, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902103, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 17:41:48.208: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902103, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902103, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902104, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902103, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 17:41:50.228: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902103, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902103, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902104, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902103, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 12 17:41:53.651: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook May 12 17:42:00.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-9542 to-be-attached-pod -i -c=container1' May 12 17:42:01.119: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:42:01.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9542" for this suite. STEP: Destroying namespace "webhook-9542-markers" for this suite. 
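The attach attempt above is the actual assertion: the registered admission webhook intercepts the CONNECT request on pods/attach and rejects it, so kubectl exits non-zero (the run records rc: 1). Reproducing the check by hand, with the pod and namespace from this run and assuming the webhook is still registered:

kubectl --kubeconfig=/root/.kube/config attach \
  --namespace=webhook-9542 to-be-attached-pod -i -c=container1
echo "attach exit code: $?"   # expected: 1 while the webhook is in place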
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:22.127 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":225,"skipped":3756,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:42:01.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 12 17:42:03.636: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 12 17:42:05.646: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902124, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902124, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902124, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902123, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 17:42:08.208: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902124, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902124, loc:(*time.Location)(0x78ee0c0)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902124, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902123, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 17:42:09.651: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902124, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902124, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902124, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902123, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 17:42:11.650: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902124, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902124, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902124, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902123, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 12 17:42:14.892: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 12 17:42:14.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:42:17.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-9921" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:16.030 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":226,"skipped":3760,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:42:17.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on tmpfs May 12 17:42:18.238: INFO: Waiting up to 5m0s for pod "pod-22a73c32-e8ec-41f1-99da-d78008b7191b" in namespace "emptydir-8029" to be "success or failure" May 12 17:42:18.256: INFO: Pod "pod-22a73c32-e8ec-41f1-99da-d78008b7191b": Phase="Pending", Reason="", readiness=false. Elapsed: 18.490772ms May 12 17:42:20.345: INFO: Pod "pod-22a73c32-e8ec-41f1-99da-d78008b7191b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107264156s May 12 17:42:22.370: INFO: Pod "pod-22a73c32-e8ec-41f1-99da-d78008b7191b": Phase="Running", Reason="", readiness=true. Elapsed: 4.13265191s May 12 17:42:24.374: INFO: Pod "pod-22a73c32-e8ec-41f1-99da-d78008b7191b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.136203708s STEP: Saw pod success May 12 17:42:24.374: INFO: Pod "pod-22a73c32-e8ec-41f1-99da-d78008b7191b" satisfied condition "success or failure" May 12 17:42:24.376: INFO: Trying to get logs from node jerma-worker pod pod-22a73c32-e8ec-41f1-99da-d78008b7191b container test-container: STEP: delete the pod May 12 17:42:24.460: INFO: Waiting for pod pod-22a73c32-e8ec-41f1-99da-d78008b7191b to disappear May 12 17:42:24.578: INFO: Pod pod-22a73c32-e8ec-41f1-99da-d78008b7191b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:42:24.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8029" for this suite. 
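The pod under test is built through the Go client, so its manifest never appears in the log. A minimal sketch of the same idea (pod name, image, and command are assumptions; the namespace and the tmpfs/mode subject come from this run): an emptyDir with medium: Memory is backed by tmpfs, and the container lists the mount to show its mode.

kubectl create --namespace=emptydir-8029 -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: tmpfs-mode-sketch          # illustrative, not the test's pod name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /test-volume"]   # the mode shows in this listing
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory               # backs the volume with tmpfs
EOF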
• [SLOW TEST:6.995 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":227,"skipped":3762,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:42:24.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 12 17:42:25.235: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
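The node-by-node check that follows is driven through the API from inside the test binary. The same placement can be eyeballed from the command line with -o wide, which adds a NODE column (the namespace comes from this run's teardown messages):

# One line per daemon pod, including the node each one landed on.
kubectl get pods --namespace=daemonsets-4339 -o wide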
May 12 17:42:25.287: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:42:25.465: INFO: Number of nodes with available pods: 0 May 12 17:42:25.465: INFO: Node jerma-worker is running more than one daemon pod May 12 17:42:26.469: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:42:26.472: INFO: Number of nodes with available pods: 0 May 12 17:42:26.472: INFO: Node jerma-worker is running more than one daemon pod May 12 17:42:27.626: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:42:27.629: INFO: Number of nodes with available pods: 0 May 12 17:42:27.629: INFO: Node jerma-worker is running more than one daemon pod May 12 17:42:28.802: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:42:28.805: INFO: Number of nodes with available pods: 0 May 12 17:42:28.805: INFO: Node jerma-worker is running more than one daemon pod May 12 17:42:29.753: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:42:29.757: INFO: Number of nodes with available pods: 0 May 12 17:42:29.757: INFO: Node jerma-worker is running more than one daemon pod May 12 17:42:30.663: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:42:30.665: INFO: Number of nodes with available pods: 0 May 12 17:42:30.665: INFO: Node jerma-worker is running more than one daemon pod May 12 17:42:31.862: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:42:31.866: INFO: Number of nodes with available pods: 1 May 12 17:42:31.866: INFO: Node jerma-worker2 is running more than one daemon pod May 12 17:42:32.603: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:42:32.638: INFO: Number of nodes with available pods: 1 May 12 17:42:32.638: INFO: Node jerma-worker2 is running more than one daemon pod May 12 17:42:33.903: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:42:33.907: INFO: Number of nodes with available pods: 2 May 12 17:42:33.907: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 12 17:42:35.066: INFO: Wrong image for pod: daemon-set-fjfdz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 12 17:42:35.066: INFO: Wrong image for pod: daemon-set-mttqn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
May 12 17:42:35.071: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:42:36.100: INFO: Wrong image for pod: daemon-set-fjfdz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 12 17:42:36.100: INFO: Wrong image for pod: daemon-set-mttqn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 12 17:42:36.104: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:42:37.076: INFO: Wrong image for pod: daemon-set-fjfdz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 12 17:42:37.076: INFO: Wrong image for pod: daemon-set-mttqn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 12 17:42:37.080: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:42:38.196: INFO: Wrong image for pod: daemon-set-fjfdz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 12 17:42:38.196: INFO: Pod daemon-set-fjfdz is not available May 12 17:42:38.196: INFO: Wrong image for pod: daemon-set-mttqn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 12 17:42:38.201: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:42:39.075: INFO: Wrong image for pod: daemon-set-fjfdz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 12 17:42:39.075: INFO: Pod daemon-set-fjfdz is not available May 12 17:42:39.075: INFO: Wrong image for pod: daemon-set-mttqn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 12 17:42:39.079: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:42:40.311: INFO: Wrong image for pod: daemon-set-fjfdz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 12 17:42:40.311: INFO: Pod daemon-set-fjfdz is not available May 12 17:42:40.311: INFO: Wrong image for pod: daemon-set-mttqn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 12 17:42:40.316: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:42:41.074: INFO: Wrong image for pod: daemon-set-fjfdz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 12 17:42:41.074: INFO: Pod daemon-set-fjfdz is not available May 12 17:42:41.074: INFO: Wrong image for pod: daemon-set-mttqn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
May 12 17:42:41.078: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:42:42.075: INFO: Wrong image for pod: daemon-set-fjfdz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 12 17:42:42.075: INFO: Pod daemon-set-fjfdz is not available May 12 17:42:42.075: INFO: Wrong image for pod: daemon-set-mttqn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 12 17:42:42.078: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:42:43.210: INFO: Wrong image for pod: daemon-set-fjfdz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 12 17:42:43.211: INFO: Pod daemon-set-fjfdz is not available May 12 17:42:43.211: INFO: Wrong image for pod: daemon-set-mttqn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 12 17:42:43.215: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:42:44.075: INFO: Wrong image for pod: daemon-set-fjfdz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 12 17:42:44.075: INFO: Pod daemon-set-fjfdz is not available May 12 17:42:44.075: INFO: Wrong image for pod: daemon-set-mttqn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 12 17:42:44.079: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:42:45.074: INFO: Wrong image for pod: daemon-set-fjfdz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 12 17:42:45.074: INFO: Pod daemon-set-fjfdz is not available May 12 17:42:45.074: INFO: Wrong image for pod: daemon-set-mttqn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 12 17:42:45.076: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:42:46.074: INFO: Wrong image for pod: daemon-set-fjfdz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 12 17:42:46.074: INFO: Pod daemon-set-fjfdz is not available May 12 17:42:46.074: INFO: Wrong image for pod: daemon-set-mttqn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 12 17:42:46.077: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:42:47.076: INFO: Wrong image for pod: daemon-set-fjfdz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 12 17:42:47.076: INFO: Pod daemon-set-fjfdz is not available May 12 17:42:47.076: INFO: Wrong image for pod: daemon-set-mttqn. 
Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 12 17:42:47.078: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:42:48.077: INFO: Wrong image for pod: daemon-set-fjfdz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 12 17:42:48.077: INFO: Pod daemon-set-fjfdz is not available May 12 17:42:48.077: INFO: Wrong image for pod: daemon-set-mttqn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 12 17:42:48.080: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:42:49.166: INFO: Wrong image for pod: daemon-set-fjfdz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 12 17:42:49.166: INFO: Pod daemon-set-fjfdz is not available May 12 17:42:49.166: INFO: Wrong image for pod: daemon-set-mttqn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 12 17:42:49.169: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:42:50.075: INFO: Pod daemon-set-llr26 is not available May 12 17:42:50.075: INFO: Wrong image for pod: daemon-set-mttqn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 12 17:42:50.079: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:42:51.075: INFO: Pod daemon-set-llr26 is not available May 12 17:42:51.075: INFO: Wrong image for pod: daemon-set-mttqn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 12 17:42:51.079: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:42:52.077: INFO: Pod daemon-set-llr26 is not available May 12 17:42:52.077: INFO: Wrong image for pod: daemon-set-mttqn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 12 17:42:52.102: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:42:53.075: INFO: Pod daemon-set-llr26 is not available May 12 17:42:53.075: INFO: Wrong image for pod: daemon-set-mttqn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 12 17:42:53.080: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:42:54.131: INFO: Wrong image for pod: daemon-set-mttqn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
May 12 17:42:54.217: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:42:55.076: INFO: Wrong image for pod: daemon-set-mttqn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 12 17:42:55.081: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:42:56.077: INFO: Wrong image for pod: daemon-set-mttqn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 12 17:42:56.077: INFO: Pod daemon-set-mttqn is not available May 12 17:42:56.080: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:42:57.075: INFO: Wrong image for pod: daemon-set-mttqn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 12 17:42:57.075: INFO: Pod daemon-set-mttqn is not available May 12 17:42:57.079: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:42:58.075: INFO: Wrong image for pod: daemon-set-mttqn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 12 17:42:58.075: INFO: Pod daemon-set-mttqn is not available May 12 17:42:58.079: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:42:59.074: INFO: Wrong image for pod: daemon-set-mttqn. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 12 17:42:59.075: INFO: Pod daemon-set-mttqn is not available May 12 17:42:59.077: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:43:00.154: INFO: Pod daemon-set-ctrx6 is not available May 12 17:43:00.157: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
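The image-by-image polling above is essentially what kubectl's rollout tracking does for you. A sketch of the equivalent manual check for this DaemonSet, with the name and namespace from the log:

# Blocks until every daemon pod has been recreated from the updated template.
kubectl rollout status daemonset/daemon-set --namespace=daemonsets-4339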
May 12 17:43:00.160: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:43:00.162: INFO: Number of nodes with available pods: 1 May 12 17:43:00.162: INFO: Node jerma-worker2 is running more than one daemon pod May 12 17:43:01.166: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:43:01.168: INFO: Number of nodes with available pods: 1 May 12 17:43:01.168: INFO: Node jerma-worker2 is running more than one daemon pod May 12 17:43:02.167: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:43:02.171: INFO: Number of nodes with available pods: 1 May 12 17:43:02.171: INFO: Node jerma-worker2 is running more than one daemon pod May 12 17:43:03.167: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:43:03.171: INFO: Number of nodes with available pods: 1 May 12 17:43:03.171: INFO: Node jerma-worker2 is running more than one daemon pod May 12 17:43:04.166: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:43:04.226: INFO: Number of nodes with available pods: 1 May 12 17:43:04.226: INFO: Node jerma-worker2 is running more than one daemon pod May 12 17:43:05.180: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:43:05.196: INFO: Number of nodes with available pods: 2 May 12 17:43:05.196: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4339, will wait for the garbage collector to delete the pods May 12 17:43:05.269: INFO: Deleting DaemonSet.extensions daemon-set took: 6.502272ms May 12 17:43:05.669: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.217474ms May 12 17:43:19.884: INFO: Number of nodes with available pods: 0 May 12 17:43:19.884: INFO: Number of running nodes: 0, number of available pods: 0 May 12 17:43:19.887: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4339/daemonsets","resourceVersion":"15633114"},"items":null} May 12 17:43:19.889: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4339/pods","resourceVersion":"15633114"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:43:19.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4339" for this suite. 
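The teardown above removes the DaemonSet and then waits for the garbage collector to delete its pods. The command-line equivalent is a plain cascading delete (name and namespace from the log):

# Dependent pods are cleaned up by the garbage collector once the owner is gone.
kubectl delete daemonset daemon-set --namespace=daemonsets-4339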
• [SLOW TEST:55.288 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":228,"skipped":3791,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:43:19.899: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 12 17:43:27.202: INFO: Successfully updated pod "labelsupdatebb47ff82-57f9-427f-a465-8995bcd4f63a" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:43:30.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3603" for this suite. 
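The "Successfully updated pod" entry above is a label edit issued through the client library, so the exact change is not logged. A hand-run equivalent with the pod name and namespace from this run and a placeholder label (the real key and value the test sets are an assumption here):

# Overwrite a label on the running pod; the projected downward API file
# inside the pod is updated to match shortly afterwards.
kubectl label pod labelsupdatebb47ff82-57f9-427f-a465-8995bcd4f63a \
  --namespace=projected-3603 example-key=example-value --overwrite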
• [SLOW TEST:10.653 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":229,"skipped":3813,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:43:30.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1525 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 12 17:43:31.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-8825' May 12 17:43:45.386: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 12 17:43:45.386: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created STEP: confirm that you can get logs from an rc May 12 17:43:45.540: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-tczh5] May 12 17:43:45.540: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-tczh5" in namespace "kubectl-8825" to be "running and ready" May 12 17:43:45.558: INFO: Pod "e2e-test-httpd-rc-tczh5": Phase="Pending", Reason="", readiness=false. Elapsed: 18.184839ms May 12 17:43:47.678: INFO: Pod "e2e-test-httpd-rc-tczh5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.137734065s May 12 17:43:49.958: INFO: Pod "e2e-test-httpd-rc-tczh5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.417512511s May 12 17:43:51.961: INFO: Pod "e2e-test-httpd-rc-tczh5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.420672866s May 12 17:43:54.114: INFO: Pod "e2e-test-httpd-rc-tczh5": Phase="Running", Reason="", readiness=true. Elapsed: 8.573410144s May 12 17:43:54.114: INFO: Pod "e2e-test-httpd-rc-tczh5" satisfied condition "running and ready" May 12 17:43:54.114: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-httpd-rc-tczh5] May 12 17:43:54.114: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-8825' May 12 17:43:54.280: INFO: stderr: "" May 12 17:43:54.280: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.99. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.99. Set the 'ServerName' directive globally to suppress this message\n[Tue May 12 17:43:51.778741 2020] [mpm_event:notice] [pid 1:tid 139700076829544] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Tue May 12 17:43:51.778799 2020] [core:notice] [pid 1:tid 139700076829544] AH00094: Command line: 'httpd -D FOREGROUND'\n" [AfterEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1530 May 12 17:43:54.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-8825' May 12 17:43:54.820: INFO: stderr: "" May 12 17:43:54.820: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:43:54.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8825" for this suite. • [SLOW TEST:25.048 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1521 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":278,"completed":230,"skipped":3824,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:43:55.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 12 17:43:59.426: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 12 17:44:01.845: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902240, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902240, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902240, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902239, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 17:44:03.937: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902240, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902240, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902240, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902239, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 17:44:05.906: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902240, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902240, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902240, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902239, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 12 17:44:09.214: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not 
comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:44:09.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9948" for this suite. STEP: Destroying namespace "webhook-9948-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:14.001 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":231,"skipped":3833,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:44:09.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 12 17:44:28.330: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 17:44:29.397: INFO: Pod pod-with-poststart-exec-hook still exists May 12 17:44:31.397: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 17:44:31.461: INFO: Pod pod-with-poststart-exec-hook still exists May 12 17:44:33.398: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 17:44:33.479: INFO: Pod pod-with-poststart-exec-hook still exists May 12 17:44:35.397: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 12 17:44:35.401: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:44:35.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8" for this suite. 
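For reference, the poststart exec hook exercised above corresponds to a pod spec along these lines — a minimal sketch with illustrative names and image, not the exact manifest the test submits:

  apiVersion: v1
  kind: Pod
  metadata:
    name: poststart-demo              # hypothetical name
  spec:
    containers:
    - name: main
      image: busybox:1.31             # any image with a shell
      command: ["sh", "-c", "sleep 3600"]
      lifecycle:
        postStart:
          exec:
            command: ["sh", "-c", "echo started > /tmp/poststart"]

The kubelet runs the postStart command right after the container is created and does not mark the container Running until the hook returns, which is why the test can check the hook's effect before deleting the pod.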
• [SLOW TEST:25.807 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":232,"skipped":3848,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:44:35.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:44:47.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4005" for this suite. • [SLOW TEST:12.286 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":278,"completed":233,"skipped":3876,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:44:47.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-7457f381-3385-4509-9b0b-bade884c8fcc STEP: Creating a pod to test consume configMaps May 12 17:44:48.886: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-be5f548c-2e2d-462d-b3fb-d2327587200d" in namespace "projected-70" to be "success or failure" May 12 17:44:49.139: INFO: Pod "pod-projected-configmaps-be5f548c-2e2d-462d-b3fb-d2327587200d": Phase="Pending", Reason="", readiness=false. Elapsed: 252.455062ms May 12 17:44:51.144: INFO: Pod "pod-projected-configmaps-be5f548c-2e2d-462d-b3fb-d2327587200d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.257456328s May 12 17:44:53.156: INFO: Pod "pod-projected-configmaps-be5f548c-2e2d-462d-b3fb-d2327587200d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.269451279s May 12 17:44:55.160: INFO: Pod "pod-projected-configmaps-be5f548c-2e2d-462d-b3fb-d2327587200d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.273759023s STEP: Saw pod success May 12 17:44:55.160: INFO: Pod "pod-projected-configmaps-be5f548c-2e2d-462d-b3fb-d2327587200d" satisfied condition "success or failure" May 12 17:44:55.163: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-be5f548c-2e2d-462d-b3fb-d2327587200d container projected-configmap-volume-test: STEP: delete the pod May 12 17:44:55.226: INFO: Waiting for pod pod-projected-configmaps-be5f548c-2e2d-462d-b3fb-d2327587200d to disappear May 12 17:44:55.330: INFO: Pod pod-projected-configmaps-be5f548c-2e2d-462d-b3fb-d2327587200d no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:44:55.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-70" for this suite. 
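The layout asserted above — a projected configMap volume with an item mapping and an explicit per-file mode — can be sketched as follows (the ConfigMap name, key, and paths here are illustrative, not the test's generated names):

  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-cm-demo
  spec:
    restartPolicy: Never
    containers:
    - name: reader
      image: busybox:1.31
      command: ["cat", "/etc/projected/mapped/key"]
      volumeMounts:
      - name: cfg
        mountPath: /etc/projected
    volumes:
    - name: cfg
      projected:
        sources:
        - configMap:
            name: demo-config         # hypothetical ConfigMap with a key "data-1"
            items:
            - key: data-1
              path: mapped/key        # remaps the key to a different relative path
              mode: 0400              # octal per-item file mode, the "Item mode" under test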
• [SLOW TEST:7.642 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":234,"skipped":3891,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} S ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:44:55.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-jws2 STEP: Creating a pod to test atomic-volume-subpath May 12 17:44:55.728: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-jws2" in namespace "subpath-3177" to be "success or failure" May 12 17:44:55.739: INFO: Pod "pod-subpath-test-configmap-jws2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.606452ms May 12 17:44:57.743: INFO: Pod "pod-subpath-test-configmap-jws2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014266538s May 12 17:44:59.947: INFO: Pod "pod-subpath-test-configmap-jws2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.218108689s May 12 17:45:02.234: INFO: Pod "pod-subpath-test-configmap-jws2": Phase="Running", Reason="", readiness=true. Elapsed: 6.505289633s May 12 17:45:04.342: INFO: Pod "pod-subpath-test-configmap-jws2": Phase="Running", Reason="", readiness=true. Elapsed: 8.613213564s May 12 17:45:06.345: INFO: Pod "pod-subpath-test-configmap-jws2": Phase="Running", Reason="", readiness=true. Elapsed: 10.616017976s May 12 17:45:08.425: INFO: Pod "pod-subpath-test-configmap-jws2": Phase="Running", Reason="", readiness=true. Elapsed: 12.696435548s May 12 17:45:10.635: INFO: Pod "pod-subpath-test-configmap-jws2": Phase="Running", Reason="", readiness=true. Elapsed: 14.906906648s May 12 17:45:12.639: INFO: Pod "pod-subpath-test-configmap-jws2": Phase="Running", Reason="", readiness=true. Elapsed: 16.910634939s May 12 17:45:14.643: INFO: Pod "pod-subpath-test-configmap-jws2": Phase="Running", Reason="", readiness=true. Elapsed: 18.91451349s May 12 17:45:16.647: INFO: Pod "pod-subpath-test-configmap-jws2": Phase="Running", Reason="", readiness=true. Elapsed: 20.918486056s May 12 17:45:18.650: INFO: Pod "pod-subpath-test-configmap-jws2": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.921305928s May 12 17:45:20.791: INFO: Pod "pod-subpath-test-configmap-jws2": Phase="Running", Reason="", readiness=true. Elapsed: 25.062186032s May 12 17:45:22.929: INFO: Pod "pod-subpath-test-configmap-jws2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 27.200606855s STEP: Saw pod success May 12 17:45:22.929: INFO: Pod "pod-subpath-test-configmap-jws2" satisfied condition "success or failure" May 12 17:45:22.932: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-configmap-jws2 container test-container-subpath-configmap-jws2: STEP: delete the pod May 12 17:45:23.140: INFO: Waiting for pod pod-subpath-test-configmap-jws2 to disappear May 12 17:45:23.402: INFO: Pod pod-subpath-test-configmap-jws2 no longer exists STEP: Deleting pod pod-subpath-test-configmap-jws2 May 12 17:45:23.402: INFO: Deleting pod "pod-subpath-test-configmap-jws2" in namespace "subpath-3177" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:45:23.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3177" for this suite. • [SLOW TEST:28.085 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":235,"skipped":3892,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:45:23.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
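The dnsPolicy=None pod created in this step corresponds to a manifest roughly like the following; the nameserver, search domain, image, and args are taken from the pod dump that follows, with the remaining fields trimmed to the essentials:

  apiVersion: v1
  kind: Pod
  metadata:
    name: dns-6619
  spec:
    dnsPolicy: "None"
    dnsConfig:
      nameservers:
      - 1.1.1.1
      searches:
      - resolv.conf.local
    containers:
    - name: agnhost
      image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
      args: ["pause"]

With dnsPolicy None the kubelet generates the pod's /etc/resolv.conf entirely from dnsConfig, ignoring the cluster DNS settings.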
May 12 17:45:24.197: INFO: Created pod &Pod{ObjectMeta:{dns-6619 dns-6619 /api/v1/namespaces/dns-6619/pods/dns-6619 a043af09-8830-41fc-8305-3e8bfe7d5bbf 15633702 0 2020-05-12 17:45:24 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ngvlp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ngvlp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ngvlp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... 
May 12 17:45:30.384: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-6619 PodName:dns-6619 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 17:45:30.384: INFO: >>> kubeConfig: /root/.kube/config I0512 17:45:30.808972 7 log.go:172] (0xc002184630) (0xc001da1400) Create stream I0512 17:45:30.809001 7 log.go:172] (0xc002184630) (0xc001da1400) Stream added, broadcasting: 1 I0512 17:45:30.810730 7 log.go:172] (0xc002184630) Reply frame received for 1 I0512 17:45:30.810761 7 log.go:172] (0xc002184630) (0xc001da1540) Create stream I0512 17:45:30.810771 7 log.go:172] (0xc002184630) (0xc001da1540) Stream added, broadcasting: 3 I0512 17:45:30.811477 7 log.go:172] (0xc002184630) Reply frame received for 3 I0512 17:45:30.811502 7 log.go:172] (0xc002184630) (0xc0026ee3c0) Create stream I0512 17:45:30.811512 7 log.go:172] (0xc002184630) (0xc0026ee3c0) Stream added, broadcasting: 5 I0512 17:45:30.818557 7 log.go:172] (0xc002184630) Reply frame received for 5 I0512 17:45:30.883771 7 log.go:172] (0xc002184630) Data frame received for 3 I0512 17:45:30.883794 7 log.go:172] (0xc001da1540) (3) Data frame handling I0512 17:45:30.883807 7 log.go:172] (0xc001da1540) (3) Data frame sent I0512 17:45:30.884801 7 log.go:172] (0xc002184630) Data frame received for 5 I0512 17:45:30.884828 7 log.go:172] (0xc0026ee3c0) (5) Data frame handling I0512 17:45:30.885044 7 log.go:172] (0xc002184630) Data frame received for 3 I0512 17:45:30.885076 7 log.go:172] (0xc001da1540) (3) Data frame handling I0512 17:45:30.886307 7 log.go:172] (0xc002184630) Data frame received for 1 I0512 17:45:30.886371 7 log.go:172] (0xc001da1400) (1) Data frame handling I0512 17:45:30.886421 7 log.go:172] (0xc001da1400) (1) Data frame sent I0512 17:45:30.886450 7 log.go:172] (0xc002184630) (0xc001da1400) Stream removed, broadcasting: 1 I0512 17:45:30.886481 7 log.go:172] (0xc002184630) Go away received I0512 17:45:30.886554 7 log.go:172] (0xc002184630) (0xc001da1400) Stream removed, broadcasting: 1 I0512 17:45:30.886578 7 log.go:172] (0xc002184630) (0xc001da1540) Stream removed, broadcasting: 3 I0512 17:45:30.886601 7 log.go:172] (0xc002184630) (0xc0026ee3c0) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
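Both verification steps amount to reading the resolver configuration inside the pod: the test execs agnhost's dns-suffix and dns-server-list subcommands, and a rough manual equivalent (assuming the container image provides cat) would be:

  kubectl exec dns-6619 --namespace=dns-6619 -- cat /etc/resolv.conf
  # expected to contain:
  #   nameserver 1.1.1.1
  #   search resolv.conf.local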
May 12 17:45:30.886: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-6619 PodName:dns-6619 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 12 17:45:30.886: INFO: >>> kubeConfig: /root/.kube/config I0512 17:45:30.961818 7 log.go:172] (0xc002184c60) (0xc001da17c0) Create stream I0512 17:45:30.961840 7 log.go:172] (0xc002184c60) (0xc001da17c0) Stream added, broadcasting: 1 I0512 17:45:30.962846 7 log.go:172] (0xc002184c60) Reply frame received for 1 I0512 17:45:30.962871 7 log.go:172] (0xc002184c60) (0xc001402fa0) Create stream I0512 17:45:30.962881 7 log.go:172] (0xc002184c60) (0xc001402fa0) Stream added, broadcasting: 3 I0512 17:45:30.963443 7 log.go:172] (0xc002184c60) Reply frame received for 3 I0512 17:45:30.963461 7 log.go:172] (0xc002184c60) (0xc0026ee460) Create stream I0512 17:45:30.963469 7 log.go:172] (0xc002184c60) (0xc0026ee460) Stream added, broadcasting: 5 I0512 17:45:30.964041 7 log.go:172] (0xc002184c60) Reply frame received for 5 I0512 17:45:31.028235 7 log.go:172] (0xc002184c60) Data frame received for 3 I0512 17:45:31.028258 7 log.go:172] (0xc001402fa0) (3) Data frame handling I0512 17:45:31.028270 7 log.go:172] (0xc001402fa0) (3) Data frame sent I0512 17:45:31.028868 7 log.go:172] (0xc002184c60) Data frame received for 3 I0512 17:45:31.028899 7 log.go:172] (0xc001402fa0) (3) Data frame handling I0512 17:45:31.028920 7 log.go:172] (0xc002184c60) Data frame received for 5 I0512 17:45:31.028935 7 log.go:172] (0xc0026ee460) (5) Data frame handling I0512 17:45:31.030237 7 log.go:172] (0xc002184c60) Data frame received for 1 I0512 17:45:31.030268 7 log.go:172] (0xc001da17c0) (1) Data frame handling I0512 17:45:31.030287 7 log.go:172] (0xc001da17c0) (1) Data frame sent I0512 17:45:31.030339 7 log.go:172] (0xc002184c60) (0xc001da17c0) Stream removed, broadcasting: 1 I0512 17:45:31.030412 7 log.go:172] (0xc002184c60) Go away received I0512 17:45:31.030460 7 log.go:172] (0xc002184c60) (0xc001da17c0) Stream removed, broadcasting: 1 I0512 17:45:31.030485 7 log.go:172] (0xc002184c60) (0xc001402fa0) Stream removed, broadcasting: 3 I0512 17:45:31.030499 7 log.go:172] (0xc002184c60) (0xc0026ee460) Stream removed, broadcasting: 5 May 12 17:45:31.030: INFO: Deleting pod dns-6619... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:45:31.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6619" for this suite. 
• [SLOW TEST:8.991 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":236,"skipped":3905,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:45:32.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-d73b739d-95ec-46e8-ae26-d8ea5194368a in namespace container-probe-6343 May 12 17:45:45.828: INFO: Started pod liveness-d73b739d-95ec-46e8-ae26-d8ea5194368a in namespace container-probe-6343 STEP: checking the pod's current state and verifying that restartCount is present May 12 17:45:45.831: INFO: Initial restart count of pod liveness-d73b739d-95ec-46e8-ae26-d8ea5194368a is 0 May 12 17:46:09.247: INFO: Restart count of pod container-probe-6343/liveness-d73b739d-95ec-46e8-ae26-d8ea5194368a is now 1 (23.415764595s elapsed) May 12 17:46:24.172: INFO: Restart count of pod container-probe-6343/liveness-d73b739d-95ec-46e8-ae26-d8ea5194368a is now 2 (38.341279964s elapsed) May 12 17:46:48.135: INFO: Restart count of pod container-probe-6343/liveness-d73b739d-95ec-46e8-ae26-d8ea5194368a is now 3 (1m2.304104285s elapsed) May 12 17:47:02.864: INFO: Restart count of pod container-probe-6343/liveness-d73b739d-95ec-46e8-ae26-d8ea5194368a is now 4 (1m17.033378035s elapsed) May 12 17:48:07.275: INFO: Restart count of pod container-probe-6343/liveness-d73b739d-95ec-46e8-ae26-d8ea5194368a is now 5 (2m21.444131013s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:48:07.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6343" for this suite. 
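The monotonically increasing restart count above comes from a liveness probe that never succeeds; a minimal sketch of the pattern (illustrative manifest, not the test's exact pod):

  apiVersion: v1
  kind: Pod
  metadata:
    name: liveness-demo
  spec:
    containers:
    - name: liveness
      image: busybox:1.31
      command: ["sh", "-c", "sleep 3600"]
      livenessProbe:
        exec:
          command: ["cat", "/tmp/missing"]   # always fails, so the kubelet keeps restarting the container
        initialDelaySeconds: 5
        periodSeconds: 5

  kubectl get pod liveness-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'

restartCount only ever increases, and the widening gaps in the log above (23s, 38s, 1m2s, 1m17s, 2m21s) reflect the kubelet's exponential restart back-off.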
• [SLOW TEST:155.037 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":237,"skipped":3912,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:48:07.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-e39d6447-5189-4ec8-b671-ecbbb2236743 STEP: Creating a pod to test consume secrets May 12 17:48:07.985: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5e985bc5-8a86-4b32-8765-fa6164db4422" in namespace "projected-6135" to be "success or failure" May 12 17:48:08.080: INFO: Pod "pod-projected-secrets-5e985bc5-8a86-4b32-8765-fa6164db4422": Phase="Pending", Reason="", readiness=false. Elapsed: 95.057183ms May 12 17:48:10.084: INFO: Pod "pod-projected-secrets-5e985bc5-8a86-4b32-8765-fa6164db4422": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09855451s May 12 17:48:12.255: INFO: Pod "pod-projected-secrets-5e985bc5-8a86-4b32-8765-fa6164db4422": Phase="Pending", Reason="", readiness=false. Elapsed: 4.26966154s May 12 17:48:14.258: INFO: Pod "pod-projected-secrets-5e985bc5-8a86-4b32-8765-fa6164db4422": Phase="Pending", Reason="", readiness=false. Elapsed: 6.27284808s May 12 17:48:16.262: INFO: Pod "pod-projected-secrets-5e985bc5-8a86-4b32-8765-fa6164db4422": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.276558284s STEP: Saw pod success May 12 17:48:16.262: INFO: Pod "pod-projected-secrets-5e985bc5-8a86-4b32-8765-fa6164db4422" satisfied condition "success or failure" May 12 17:48:16.264: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-5e985bc5-8a86-4b32-8765-fa6164db4422 container projected-secret-volume-test: STEP: delete the pod May 12 17:48:16.322: INFO: Waiting for pod pod-projected-secrets-5e985bc5-8a86-4b32-8765-fa6164db4422 to disappear May 12 17:48:16.704: INFO: Pod pod-projected-secrets-5e985bc5-8a86-4b32-8765-fa6164db4422 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:48:16.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6135" for this suite. • [SLOW TEST:9.497 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":238,"skipped":3943,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:48:16.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5636.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5636.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5636.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5636.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 12 17:48:29.668: INFO: DNS probes using dns-test-4d115a98-25ab-4412-a996-f0873816bdf8 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5636.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5636.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5636.svc.cluster.local CNAME > 
/results/jessie_udp@dns-test-service-3.dns-5636.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 12 17:48:40.240: INFO: File wheezy_udp@dns-test-service-3.dns-5636.svc.cluster.local from pod dns-5636/dns-test-0c884af0-f35a-4e94-b9e1-241b6f302695 contains 'foo.example.com. ' instead of 'bar.example.com.' May 12 17:48:40.243: INFO: File jessie_udp@dns-test-service-3.dns-5636.svc.cluster.local from pod dns-5636/dns-test-0c884af0-f35a-4e94-b9e1-241b6f302695 contains 'foo.example.com. ' instead of 'bar.example.com.' May 12 17:48:40.243: INFO: Lookups using dns-5636/dns-test-0c884af0-f35a-4e94-b9e1-241b6f302695 failed for: [wheezy_udp@dns-test-service-3.dns-5636.svc.cluster.local jessie_udp@dns-test-service-3.dns-5636.svc.cluster.local] May 12 17:48:45.247: INFO: File wheezy_udp@dns-test-service-3.dns-5636.svc.cluster.local from pod dns-5636/dns-test-0c884af0-f35a-4e94-b9e1-241b6f302695 contains 'foo.example.com. ' instead of 'bar.example.com.' May 12 17:48:45.250: INFO: File jessie_udp@dns-test-service-3.dns-5636.svc.cluster.local from pod dns-5636/dns-test-0c884af0-f35a-4e94-b9e1-241b6f302695 contains 'foo.example.com. ' instead of 'bar.example.com.' May 12 17:48:45.250: INFO: Lookups using dns-5636/dns-test-0c884af0-f35a-4e94-b9e1-241b6f302695 failed for: [wheezy_udp@dns-test-service-3.dns-5636.svc.cluster.local jessie_udp@dns-test-service-3.dns-5636.svc.cluster.local] May 12 17:48:50.272: INFO: File wheezy_udp@dns-test-service-3.dns-5636.svc.cluster.local from pod dns-5636/dns-test-0c884af0-f35a-4e94-b9e1-241b6f302695 contains 'foo.example.com. ' instead of 'bar.example.com.' May 12 17:48:50.276: INFO: File jessie_udp@dns-test-service-3.dns-5636.svc.cluster.local from pod dns-5636/dns-test-0c884af0-f35a-4e94-b9e1-241b6f302695 contains 'foo.example.com. ' instead of 'bar.example.com.' May 12 17:48:50.276: INFO: Lookups using dns-5636/dns-test-0c884af0-f35a-4e94-b9e1-241b6f302695 failed for: [wheezy_udp@dns-test-service-3.dns-5636.svc.cluster.local jessie_udp@dns-test-service-3.dns-5636.svc.cluster.local] May 12 17:48:55.353: INFO: File wheezy_udp@dns-test-service-3.dns-5636.svc.cluster.local from pod dns-5636/dns-test-0c884af0-f35a-4e94-b9e1-241b6f302695 contains 'foo.example.com. ' instead of 'bar.example.com.' May 12 17:48:55.402: INFO: File jessie_udp@dns-test-service-3.dns-5636.svc.cluster.local from pod dns-5636/dns-test-0c884af0-f35a-4e94-b9e1-241b6f302695 contains 'foo.example.com. ' instead of 'bar.example.com.' 
May 12 17:48:55.402: INFO: Lookups using dns-5636/dns-test-0c884af0-f35a-4e94-b9e1-241b6f302695 failed for: [wheezy_udp@dns-test-service-3.dns-5636.svc.cluster.local jessie_udp@dns-test-service-3.dns-5636.svc.cluster.local] May 12 17:49:00.287: INFO: DNS probes using dns-test-0c884af0-f35a-4e94-b9e1-241b6f302695 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5636.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-5636.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5636.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-5636.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 12 17:49:13.362: INFO: DNS probes using dns-test-ae5dfd37-049e-49d5-91e7-a3469f73ffa4 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:49:13.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5636" for this suite. • [SLOW TEST:56.816 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":239,"skipped":3957,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:49:13.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 May 12 17:49:14.286: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the sample API server. 
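Registering the sample API server means creating an APIService object that tells the aggregation layer to proxy one group/version to an in-cluster Service. A hedged sketch — the wardle group follows the upstream sample-apiserver convention, and the Service name and caBundle here are placeholders, not values from this run:

  apiVersion: apiregistration.k8s.io/v1
  kind: APIService
  metadata:
    name: v1alpha1.wardle.example.com
  spec:
    group: wardle.example.com
    version: v1alpha1
    groupPriorityMinimum: 2000
    versionPriority: 200
    service:
      name: sample-api                # placeholder: the Service fronting the deployment below
      namespace: aggregator-7390
    caBundle: <base64-encoded CA>     # placeholder: used to verify the backend's serving cert

Once the backing deployment becomes Available and the APIService's backend is reachable, kube-apiserver forwards requests for that group/version to the sample server.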
May 12 17:49:14.802: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set May 12 17:49:17.511: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902554, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902554, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902554, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902554, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 17:49:19.698: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902554, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902554, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902554, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902554, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 17:49:21.556: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902554, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902554, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902554, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902554, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 17:49:23.530: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902554, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902554, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", 
Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902554, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902554, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 17:49:26.412: INFO: Waited 803.460266ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:49:29.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-7390" for this suite. • [SLOW TEST:15.785 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":240,"skipped":3957,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:49:29.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 12 17:49:30.413: INFO: Waiting up to 5m0s for pod "downward-api-f9a5690f-aedd-4a33-8d1a-b26756054bb8" in namespace "downward-api-2944" to be "success or failure" May 12 17:49:30.439: INFO: Pod "downward-api-f9a5690f-aedd-4a33-8d1a-b26756054bb8": Phase="Pending", Reason="", readiness=false. Elapsed: 26.928079ms May 12 17:49:32.444: INFO: Pod "downward-api-f9a5690f-aedd-4a33-8d1a-b26756054bb8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031041542s May 12 17:49:34.448: INFO: Pod "downward-api-f9a5690f-aedd-4a33-8d1a-b26756054bb8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035228125s May 12 17:49:36.452: INFO: Pod "downward-api-f9a5690f-aedd-4a33-8d1a-b26756054bb8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.039110737s STEP: Saw pod success May 12 17:49:36.452: INFO: Pod "downward-api-f9a5690f-aedd-4a33-8d1a-b26756054bb8" satisfied condition "success or failure" May 12 17:49:36.455: INFO: Trying to get logs from node jerma-worker2 pod downward-api-f9a5690f-aedd-4a33-8d1a-b26756054bb8 container dapi-container: STEP: delete the pod May 12 17:49:36.478: INFO: Waiting for pod downward-api-f9a5690f-aedd-4a33-8d1a-b26756054bb8 to disappear May 12 17:49:36.495: INFO: Pod downward-api-f9a5690f-aedd-4a33-8d1a-b26756054bb8 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:49:36.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2944" for this suite. • [SLOW TEST:6.948 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":241,"skipped":4019,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:49:36.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0512 17:49:48.412606 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
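The setup above gives half of the pods created by simpletest-rc-to-be-deleted a second ownerReference pointing at simpletest-rc-to-stay, then deletes the first RC with foreground propagation so it waits for its dependents. The garbage collector only removes an object once it has no remaining live owner, so the doubly-owned pods must survive. A rough manual equivalent (pod name hypothetical; the --cascade=foreground spelling is from newer kubectl, while clients of this era set propagationPolicy=Foreground via DeleteOptions):

  kubectl get pod simpletest-pod -o jsonpath='{.metadata.ownerReferences[*].name}'
  # -> simpletest-rc-to-be-deleted simpletest-rc-to-stay
  kubectl delete rc simpletest-rc-to-be-deleted --cascade=foreground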
May 12 17:49:48.412: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:49:48.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3814" for this suite. • [SLOW TEST:11.917 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":242,"skipped":4025,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:49:48.419: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 12 17:49:49.939: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 12 17:49:55.398: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 12 17:49:58.298: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 12 17:50:08.261: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-9503 /apis/apps/v1/namespaces/deployment-9503/deployments/test-cleanup-deployment a59d240a-b011-461e-adfc-1d445d62132d 15634973 1 2020-05-12 17:49:58 +0000 UTC map[name:cleanup-pod] 
map[deployment.kubernetes.io/revision:1] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00540c058 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-12 17:50:01 +0000 UTC,LastTransitionTime:2020-05-12 17:50:01 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-55ffc6b7b6" has successfully progressed.,LastUpdateTime:2020-05-12 17:50:06 +0000 UTC,LastTransitionTime:2020-05-12 17:50:00 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 12 17:50:08.263: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-9503 /apis/apps/v1/namespaces/deployment-9503/replicasets/test-cleanup-deployment-55ffc6b7b6 0b53db66-17a2-4c36-a301-e08473469090 15634961 1 2020-05-12 17:50:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment a59d240a-b011-461e-adfc-1d445d62132d 0xc00540c427 0xc00540c428}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00540c498 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 12 17:50:08.265: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-wsrlg" is available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-wsrlg test-cleanup-deployment-55ffc6b7b6- deployment-9503 /api/v1/namespaces/deployment-9503/pods/test-cleanup-deployment-55ffc6b7b6-wsrlg 3c4fd05a-9d98-41dc-b0d0-e16e1af9504a 15634960 0 2020-05-12 17:50:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 0b53db66-17a2-4c36-a301-e08473469090 0xc00540c827 0xc00540c828}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-sch57,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-sch57,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-sch57,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-05-12 17:50:01 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:50:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:50:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-12 17:50:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.237,StartTime:2020-05-12 17:50:01 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-12 17:50:05 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://a731dd4dd9ca075cdb68d42bc753b0b0a21af0e489501a4b1bd7546986390373,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.237,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:50:08.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9503" for this suite. • [SLOW TEST:19.851 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":243,"skipped":4025,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:50:08.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 12 17:50:08.472: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota May 12 17:50:11.832: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" 
has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:50:12.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8255" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":244,"skipped":4073,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:50:12.589: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation May 12 17:50:13.904: INFO: >>> kubeConfig: /root/.kube/config May 12 17:50:17.366: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:50:29.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8452" for this suite. 
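Editor's sketch of the check the crd-publish-openapi test above performs (this is not the framework's own code; the two API group names are hypothetical, and the code assumes client-go v0.17 to match this run, where Do() takes no context argument): once two CRDs in different groups are created, both should appear in the apiserver's aggregated OpenAPI document at /openapi/v2.

package main

import (
	"fmt"
	"strings"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Fetch the aggregated OpenAPI v2 spec from the apiserver.
	raw, err := client.Discovery().RESTClient().Get().AbsPath("/openapi/v2").Do().Raw()
	if err != nil {
		panic(err)
	}
	// A published CRD shows up in the spec under its API group
	// ("foo.example.com"/"bar.example.com" are placeholders).
	for _, group := range []string{"foo.example.com", "bar.example.com"} {
		fmt.Printf("group %s published: %v\n", group, strings.Contains(string(raw), group))
	}
}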
• [SLOW TEST:18.010 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":245,"skipped":4100,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:50:30.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: expected 0 pods, got 2 pods STEP: expected 0 rs, got 1 rs STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: expected 0 pods, got 2 pods STEP: expected 0 rs, got 1 rs STEP: Gathering metrics W0512 17:50:47.437530 7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 12 17:50:47.437: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:50:47.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-84" for this suite. • [SLOW TEST:16.844 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":246,"skipped":4115,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:50:47.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation May 12 17:50:48.039: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation May 12 17:50:58.642: INFO: >>> kubeConfig: /root/.kube/config May 12 17:51:02.214: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:51:14.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4946" for this suite. 
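Editor's sketch for the "same group but different versions" case above: the one-multiversion-CRD variant hinges on a single CRD serving two versions at once, only one of which is the storage version. A minimal such object in the apiextensions v1 Go types (all names here are hypothetical, not taken from this log):

package crdexample

import (
	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// multiVersionCRD builds a CRD serving v1 and v2 of the same kind.
// Exactly one version may have Storage=true; both served versions
// appear in /openapi/v2 once the apiserver has processed the CRD.
func multiVersionCRD() *apiextensionsv1.CustomResourceDefinition {
	// A permissive structural schema; a real CRD would declare properties.
	schema := &apiextensionsv1.CustomResourceValidation{
		OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
			Type:                   "object",
			XPreserveUnknownFields: boolPtr(true),
		},
	}
	return &apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "multis.stable.example.com"},
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "stable.example.com",
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "multis", Singular: "multi", Kind: "Multi", ListKind: "MultiList",
			},
			Scope: apiextensionsv1.NamespaceScoped,
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{
				{Name: "v1", Served: true, Storage: true, Schema: schema},
				{Name: "v2", Served: true, Storage: false, Schema: schema},
			},
		},
	}
}

func boolPtr(b bool) *bool { return &b }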
• [SLOW TEST:27.096 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":247,"skipped":4155,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:51:14.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 12 17:51:15.627: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 12 17:51:17.634: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902675, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902675, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902675, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902675, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 17:51:19.712: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902675, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902675, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum 
availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902675, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902675, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 12 17:51:22.698: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 12 17:51:22.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:51:24.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-9579" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:11.633 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":248,"skipped":4170,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:51:26.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs May 12 17:51:27.776: INFO: Waiting up to 5m0s for pod "pod-f7cbc3fe-3228-40fe-92f0-9d251368472d" in namespace "emptydir-2005" to be "success or failure" May 12 17:51:27.865: INFO: Pod "pod-f7cbc3fe-3228-40fe-92f0-9d251368472d": Phase="Pending", Reason="", readiness=false. Elapsed: 88.271173ms May 12 17:51:30.167: INFO: Pod "pod-f7cbc3fe-3228-40fe-92f0-9d251368472d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.390592287s May 12 17:51:32.170: INFO: Pod "pod-f7cbc3fe-3228-40fe-92f0-9d251368472d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.393313446s May 12 17:51:34.227: INFO: Pod "pod-f7cbc3fe-3228-40fe-92f0-9d251368472d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.450555263s May 12 17:51:36.271: INFO: Pod "pod-f7cbc3fe-3228-40fe-92f0-9d251368472d": Phase="Running", Reason="", readiness=true. Elapsed: 8.495234992s May 12 17:51:38.281: INFO: Pod "pod-f7cbc3fe-3228-40fe-92f0-9d251368472d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.504275455s STEP: Saw pod success May 12 17:51:38.281: INFO: Pod "pod-f7cbc3fe-3228-40fe-92f0-9d251368472d" satisfied condition "success or failure" May 12 17:51:38.282: INFO: Trying to get logs from node jerma-worker pod pod-f7cbc3fe-3228-40fe-92f0-9d251368472d container test-container: STEP: delete the pod May 12 17:51:38.752: INFO: Waiting for pod pod-f7cbc3fe-3228-40fe-92f0-9d251368472d to disappear May 12 17:51:38.953: INFO: Pod pod-f7cbc3fe-3228-40fe-92f0-9d251368472d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:51:38.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2005" for this suite. • [SLOW TEST:12.785 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":249,"skipped":4184,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:51:38.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 12 17:51:40.002: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 12 17:51:42.013: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902700, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902700, 
loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902700, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902699, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 17:51:44.018: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902700, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902700, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902700, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902699, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 12 17:51:47.740: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 12 17:51:48.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7273-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:51:50.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-190" for this suite. STEP: Destroying namespace "webhook-190-markers" for this suite. 
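Editor's sketch of the storage-version flip at the heart of the webhook test above ("Patching Custom Resource Definition to set v2 as storage"): a hedged approximation, not the framework's code, assuming the v0.17 apiextensions clientset where Update takes no context.

package crdexample

import (
	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
)

// setStorageVersion flips which served version newly written objects are
// persisted as. Existing objects keep the version they were last stored
// in until rewritten, which is why the test patches the custom resource
// again after the flip.
func setStorageVersion(client apiextensionsclient.Interface, crd *apiextensionsv1.CustomResourceDefinition, version string) (*apiextensionsv1.CustomResourceDefinition, error) {
	for i := range crd.Spec.Versions {
		crd.Spec.Versions[i].Storage = crd.Spec.Versions[i].Name == version
	}
	return client.ApiextensionsV1().CustomResourceDefinitions().Update(crd)
}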
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.210 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":250,"skipped":4184,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:51:51.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 12 17:51:51.366: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-0fc54fae-f396-4168-9e84-30b174014fd7" in namespace "security-context-test-817" to be "success or failure" May 12 17:51:51.514: INFO: Pod "alpine-nnp-false-0fc54fae-f396-4168-9e84-30b174014fd7": Phase="Pending", Reason="", readiness=false. Elapsed: 148.654709ms May 12 17:51:53.916: INFO: Pod "alpine-nnp-false-0fc54fae-f396-4168-9e84-30b174014fd7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.550363615s May 12 17:51:56.319: INFO: Pod "alpine-nnp-false-0fc54fae-f396-4168-9e84-30b174014fd7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.953054642s May 12 17:51:58.509: INFO: Pod "alpine-nnp-false-0fc54fae-f396-4168-9e84-30b174014fd7": Phase="Pending", Reason="", readiness=false. Elapsed: 7.143503665s May 12 17:52:00.513: INFO: Pod "alpine-nnp-false-0fc54fae-f396-4168-9e84-30b174014fd7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.14754898s May 12 17:52:00.513: INFO: Pod "alpine-nnp-false-0fc54fae-f396-4168-9e84-30b174014fd7" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:52:00.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-817" for this suite. 
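Editor's sketch of the pod shape the security-context test above builds: a single container whose securityContext pins allowPrivilegeEscalation to false, so the kernel's no_new_privs flag is set and setuid binaries cannot gain privileges. Pod name and image below are illustrative, not the framework's exact values.

package secctxexample

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func nnpFalsePod() *corev1.Pod {
	nnp := false
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "alpine-nnp-false"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "alpine-nnp-false",
				Image: "alpine:3.11", // placeholder image
				// With AllowPrivilegeEscalation=false the container must not
				// be able to raise privileges; the real test asserts this by
				// inspecting the process after the pod succeeds.
				SecurityContext: &corev1.SecurityContext{
					AllowPrivilegeEscalation: &nnp,
				},
			}},
		},
	}
}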
• [SLOW TEST:9.369 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when creating containers with AllowPrivilegeEscalation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:289 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":251,"skipped":4194,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:52:00.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:52:19.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2850" for this suite. • [SLOW TEST:19.041 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":278,"completed":252,"skipped":4201,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:52:19.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-secret-4nlb STEP: Creating a pod to test atomic-volume-subpath May 12 17:52:20.666: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-4nlb" in namespace "subpath-2992" to be "success or failure" May 12 17:52:20.851: INFO: Pod "pod-subpath-test-secret-4nlb": Phase="Pending", Reason="", readiness=false. Elapsed: 184.177956ms May 12 17:52:23.482: INFO: Pod "pod-subpath-test-secret-4nlb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.81528033s May 12 17:52:25.490: INFO: Pod "pod-subpath-test-secret-4nlb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.823259111s May 12 17:52:27.570: INFO: Pod "pod-subpath-test-secret-4nlb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.903905776s May 12 17:52:29.575: INFO: Pod "pod-subpath-test-secret-4nlb": Phase="Running", Reason="", readiness=true. Elapsed: 8.90825136s May 12 17:52:31.578: INFO: Pod "pod-subpath-test-secret-4nlb": Phase="Running", Reason="", readiness=true. Elapsed: 10.912063091s May 12 17:52:33.585: INFO: Pod "pod-subpath-test-secret-4nlb": Phase="Running", Reason="", readiness=true. Elapsed: 12.918415075s May 12 17:52:35.588: INFO: Pod "pod-subpath-test-secret-4nlb": Phase="Running", Reason="", readiness=true. Elapsed: 14.921191691s May 12 17:52:37.592: INFO: Pod "pod-subpath-test-secret-4nlb": Phase="Running", Reason="", readiness=true. Elapsed: 16.925200129s May 12 17:52:40.000: INFO: Pod "pod-subpath-test-secret-4nlb": Phase="Running", Reason="", readiness=true. Elapsed: 19.333927948s May 12 17:52:42.003: INFO: Pod "pod-subpath-test-secret-4nlb": Phase="Running", Reason="", readiness=true. Elapsed: 21.336723512s May 12 17:52:44.007: INFO: Pod "pod-subpath-test-secret-4nlb": Phase="Running", Reason="", readiness=true. Elapsed: 23.340750723s May 12 17:52:46.012: INFO: Pod "pod-subpath-test-secret-4nlb": Phase="Running", Reason="", readiness=true. Elapsed: 25.345339064s May 12 17:52:48.016: INFO: Pod "pod-subpath-test-secret-4nlb": Phase="Running", Reason="", readiness=true. Elapsed: 27.349155912s May 12 17:52:50.018: INFO: Pod "pod-subpath-test-secret-4nlb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 29.352056723s STEP: Saw pod success May 12 17:52:50.018: INFO: Pod "pod-subpath-test-secret-4nlb" satisfied condition "success or failure" May 12 17:52:50.020: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-secret-4nlb container test-container-subpath-secret-4nlb: STEP: delete the pod May 12 17:52:50.076: INFO: Waiting for pod pod-subpath-test-secret-4nlb to disappear May 12 17:52:50.144: INFO: Pod pod-subpath-test-secret-4nlb no longer exists STEP: Deleting pod pod-subpath-test-secret-4nlb May 12 17:52:50.144: INFO: Deleting pod "pod-subpath-test-secret-4nlb" in namespace "subpath-2992" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:52:50.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2992" for this suite. • [SLOW TEST:30.573 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":253,"skipped":4203,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} S ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:52:50.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
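Editor's sketch: the [It] block below creates a pod whose preStop hook points back at the handler container created in the step just above. On pod deletion the kubelet performs the HTTP GET first and only then signals the container, which is how the test observes the hook. The pod name comes from this log; the handler address, port, echo path, and image are assumptions, and corev1.Handler is the v1.17-era type name (later renamed LifecycleHandler).

package hookexample

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func podWithPreStopHTTPHook(handlerIP string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-prestop-http-hook",
				Image: "k8s.gcr.io/pause:3.1", // placeholder; the hook is sent by the kubelet, not the container
				Lifecycle: &corev1.Lifecycle{
					PreStop: &corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/echo?msg=prestop", // hypothetical handler path
							Host: handlerIP,
							Port: intstr.FromInt(8080), // hypothetical handler port
						},
					},
				},
			}},
		},
	}
}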
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 12 17:53:02.928: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 12 17:53:03.021: INFO: Pod pod-with-prestop-http-hook still exists May 12 17:53:05.022: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 12 17:53:05.055: INFO: Pod pod-with-prestop-http-hook still exists May 12 17:53:07.022: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 12 17:53:07.098: INFO: Pod pod-with-prestop-http-hook still exists May 12 17:53:09.022: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 12 17:53:09.169: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:53:09.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5087" for this suite. • [SLOW TEST:19.231 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":254,"skipped":4204,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:53:09.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 12 17:53:13.050: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 12 17:53:15.653: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902793, loc:(*time.Location)(0x78ee0c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902793, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902793, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902792, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 17:53:17.657: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902793, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902793, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902793, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902792, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 17:53:19.711: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902793, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902793, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902793, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902792, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 17:53:21.953: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902793, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902793, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902793, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902792, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook 
service STEP: Verifying the service has paired with the endpoint May 12 17:53:24.715: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:53:25.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9395" for this suite. STEP: Destroying namespace "webhook-9395-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.578 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":255,"skipped":4204,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} S ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:53:25.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
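Editor's sketch: the postStart case below is symmetric to the preStop sketch earlier; only the Lifecycle field changes. The kubelet issues the GET immediately after the container starts, and a failed postStart handler causes the container to be killed and restarted according to its restart policy. Path and port are again assumptions.

package hookexample

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// postStartHTTPHook is assigned to a container's Lifecycle.PostStart.
func postStartHTTPHook(handlerIP string) *corev1.Lifecycle {
	return &corev1.Lifecycle{
		PostStart: &corev1.Handler{
			HTTPGet: &corev1.HTTPGetAction{
				Path: "/echo?msg=poststart", // hypothetical handler path
				Host: handlerIP,
				Port: intstr.FromInt(8080), // hypothetical handler port
			},
		},
	}
}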
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 12 17:53:38.264: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 12 17:53:38.294: INFO: Pod pod-with-poststart-http-hook still exists May 12 17:53:40.294: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 12 17:53:40.297: INFO: Pod pod-with-poststart-http-hook still exists May 12 17:53:42.294: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 12 17:53:42.303: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:53:42.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8512" for this suite. • [SLOW TEST:16.349 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":256,"skipped":4205,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:53:42.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 12 17:53:42.574: INFO: Create a RollingUpdate DaemonSet May 12 17:53:42.577: INFO: Check that daemon pods launch on every node of the cluster May 12 17:53:42.615: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:53:42.726: INFO: Number of nodes with available pods: 0 May 12 17:53:42.726: INFO: Node jerma-worker is running more than one daemon pod May 12 17:53:44.284: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:53:44.638: INFO: Number of nodes with 
available pods: 0 May 12 17:53:44.638: INFO: Node jerma-worker is running more than one daemon pod May 12 17:53:44.960: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:53:44.980: INFO: Number of nodes with available pods: 0 May 12 17:53:44.980: INFO: Node jerma-worker is running more than one daemon pod May 12 17:53:45.844: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:53:45.847: INFO: Number of nodes with available pods: 0 May 12 17:53:45.847: INFO: Node jerma-worker is running more than one daemon pod May 12 17:53:46.945: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:53:46.948: INFO: Number of nodes with available pods: 0 May 12 17:53:46.948: INFO: Node jerma-worker is running more than one daemon pod May 12 17:53:47.871: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:53:47.956: INFO: Number of nodes with available pods: 0 May 12 17:53:47.956: INFO: Node jerma-worker is running more than one daemon pod May 12 17:53:48.991: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:53:49.212: INFO: Number of nodes with available pods: 0 May 12 17:53:49.212: INFO: Node jerma-worker is running more than one daemon pod May 12 17:53:50.164: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:53:50.232: INFO: Number of nodes with available pods: 2 May 12 17:53:50.232: INFO: Number of running nodes: 2, number of available pods: 2 May 12 17:53:50.232: INFO: Update the DaemonSet to trigger a rollout May 12 17:53:50.501: INFO: Updating DaemonSet daemon-set May 12 17:53:59.230: INFO: Roll back the DaemonSet before rollout is complete May 12 17:54:00.699: INFO: Updating DaemonSet daemon-set May 12 17:54:00.699: INFO: Make sure DaemonSet rollback is complete May 12 17:54:00.938: INFO: Wrong image for pod: daemon-set-vqrw5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 12 17:54:00.938: INFO: Pod daemon-set-vqrw5 is not available May 12 17:54:01.272: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:54:03.141: INFO: Wrong image for pod: daemon-set-vqrw5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 12 17:54:03.141: INFO: Pod daemon-set-vqrw5 is not available May 12 17:54:03.462: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:54:04.822: INFO: Wrong image for pod: daemon-set-vqrw5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
May 12 17:54:04.822: INFO: Pod daemon-set-vqrw5 is not available May 12 17:54:05.704: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:54:06.279: INFO: Wrong image for pod: daemon-set-vqrw5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 12 17:54:06.279: INFO: Pod daemon-set-vqrw5 is not available May 12 17:54:06.282: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:54:07.775: INFO: Wrong image for pod: daemon-set-vqrw5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 12 17:54:07.775: INFO: Pod daemon-set-vqrw5 is not available May 12 17:54:07.778: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:54:08.901: INFO: Wrong image for pod: daemon-set-vqrw5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 12 17:54:08.901: INFO: Pod daemon-set-vqrw5 is not available May 12 17:54:09.919: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:54:10.978: INFO: Wrong image for pod: daemon-set-vqrw5. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 12 17:54:10.978: INFO: Pod daemon-set-vqrw5 is not available May 12 17:54:11.523: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 12 17:54:12.275: INFO: Pod daemon-set-qtzmg is not available May 12 17:54:12.277: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4438, will wait for the garbage collector to delete the pods May 12 17:54:12.410: INFO: Deleting DaemonSet.extensions daemon-set took: 74.340499ms May 12 17:54:12.710: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.190356ms May 12 17:54:20.319: INFO: Number of nodes with available pods: 0 May 12 17:54:20.319: INFO: Number of running nodes: 0, number of available pods: 0 May 12 17:54:20.322: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4438/daemonsets","resourceVersion":"15636273"},"items":null} May 12 17:54:20.334: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4438/pods","resourceVersion":"15636274"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:54:20.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4438" for this suite. 
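The rollback verification above is an image check: the framework polls until no daemon pod still reports the rollout image foo:non-existent and every pod is back on docker.io/library/httpd:2.4.38-alpine. A minimal offline sketch of that comparison using the corev1 types (the pod names and the helper functions here are illustrative, not the suite's code):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// wrongImagePods returns the names of pods whose first container does not
// run the expected image, mirroring the "Wrong image for pod" check above.
func wrongImagePods(pods []corev1.Pod, want string) []string {
	var wrong []string
	for _, p := range pods {
		if got := p.Spec.Containers[0].Image; got != want {
			wrong = append(wrong, fmt.Sprintf("%s (expected %s, got %s)", p.Name, want, got))
		}
	}
	return wrong
}

func makePod(name, image string) corev1.Pod {
	return corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec:       corev1.PodSpec{Containers: []corev1.Container{{Name: "app", Image: image}}},
	}
}

func main() {
	pods := []corev1.Pod{
		makePod("daemon-set-ok", "docker.io/library/httpd:2.4.38-alpine"),
		makePod("daemon-set-bad", "foo:non-existent"), // hypothetical leftover from the aborted rollout
	}
	fmt.Println(wrongImagePods(pods, "docker.io/library/httpd:2.4.38-alpine"))
}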
• [SLOW TEST:38.033 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":257,"skipped":4238,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:54:20.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-608082ff-4238-498b-b3da-8c9ab0d79b39 STEP: Creating a pod to test consume secrets May 12 17:54:21.894: INFO: Waiting up to 5m0s for pod "pod-secrets-26fa28e5-01bd-4505-9206-7c100af7f4b5" in namespace "secrets-6850" to be "success or failure" May 12 17:54:22.086: INFO: Pod "pod-secrets-26fa28e5-01bd-4505-9206-7c100af7f4b5": Phase="Pending", Reason="", readiness=false. Elapsed: 192.304359ms May 12 17:54:24.089: INFO: Pod "pod-secrets-26fa28e5-01bd-4505-9206-7c100af7f4b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.1951624s May 12 17:54:26.464: INFO: Pod "pod-secrets-26fa28e5-01bd-4505-9206-7c100af7f4b5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.570148054s May 12 17:54:28.467: INFO: Pod "pod-secrets-26fa28e5-01bd-4505-9206-7c100af7f4b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.573544155s STEP: Saw pod success May 12 17:54:28.467: INFO: Pod "pod-secrets-26fa28e5-01bd-4505-9206-7c100af7f4b5" satisfied condition "success or failure" May 12 17:54:28.470: INFO: Trying to get logs from node jerma-worker pod pod-secrets-26fa28e5-01bd-4505-9206-7c100af7f4b5 container secret-volume-test: STEP: delete the pod May 12 17:54:28.504: INFO: Waiting for pod pod-secrets-26fa28e5-01bd-4505-9206-7c100af7f4b5 to disappear May 12 17:54:28.613: INFO: Pod pod-secrets-26fa28e5-01bd-4505-9206-7c100af7f4b5 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:54:28.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6850" for this suite. 
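The consuming pod in the Secrets test mounts the secret as a volume, and the test container prints the mounted file so the framework can assert on its logs. A sketch of the relevant volume wiring, constructed offline and printed as JSON (names and the busybox image are illustrative stand-ins for the suite's own fixtures):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0644) // assumed file mode; the suite varies this per test
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName:  "secret-test-example",
						DefaultMode: &mode,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "secret-volume-test",
				Image: "busybox", // stand-in image
				VolumeMounts: []corev1.VolumeMount{{
					Name: "secret-volume", MountPath: "/etc/secret-volume", ReadOnly: true,
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}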
• [SLOW TEST:8.274 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":258,"skipped":4252,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:54:28.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7800.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-7800.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7800.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7800.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-7800.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7800.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 12 17:54:48.038: INFO: DNS probes using dns-7800/dns-test-311e69eb-5a2c-402e-87e7-05f58191b747 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:54:48.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7800" for this suite. 
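The probe scripts in the DNS test derive each pod's A record from its IP: the hostname -i | awk pipeline rewrites an address such as 10.244.1.7 into 10-244-1-7.dns-7800.pod.cluster.local. The same transformation in Go (the example IP is hypothetical):

package main

import (
	"fmt"
	"strings"
)

// podARecord mirrors the awk pipeline in the probe script: dots in the pod IP
// become dashes, then the namespace and pod DNS suffix are appended.
func podARecord(ip, namespace string) string {
	return strings.ReplaceAll(ip, ".", "-") + "." + namespace + ".pod.cluster.local"
}

func main() {
	fmt.Println(podARecord("10.244.1.7", "dns-7800")) // hypothetical pod IP
}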
• [SLOW TEST:19.946 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":259,"skipped":4294,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:54:48.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-aeb79d17-74f3-4eb2-b147-84293349eb1e STEP: Creating secret with name s-test-opt-upd-52a1cfab-6f5b-45c2-bf28-58663402885a STEP: Creating the pod STEP: Deleting secret s-test-opt-del-aeb79d17-74f3-4eb2-b147-84293349eb1e STEP: Updating secret s-test-opt-upd-52a1cfab-6f5b-45c2-bf28-58663402885a STEP: Creating secret with name s-test-opt-create-8705bbd3-4e3b-42d3-917d-3986700b055e STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:55:10.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9595" for this suite. 
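The "optional updates" test works because the secrets are mounted through a projected volume with optional set: deleting a source secret (s-test-opt-del-...) empties its file instead of failing the mount, while updates and late creations propagate into the volume. A sketch of that projection with the secret names shortened for illustration:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := true
	vol := corev1.Volume{
		Name: "projected-secrets",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-del"}, // deleted mid-test
						Optional:             &optional,
					}},
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-upd"}, // updated mid-test
						Optional:             &optional,
					}},
				},
			},
		},
	}
	b, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(b))
}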
• [SLOW TEST:21.911 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":260,"skipped":4302,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:55:10.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:55:31.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-2684" for this suite. STEP: Destroying namespace "nsdeletetest-1049" for this suite. May 12 17:55:31.042: INFO: Namespace nsdeletetest-1049 was already deleted STEP: Destroying namespace "nsdeletetest-9974" for this suite. 
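The namespace test relies on deletion cascading to every pod inside the namespace before it is recreated. A compile-only sketch of the final "no pods remain" check against a live cluster, assuming the context-aware client-go API (v0.18+); the suite's own implementation differs in detail:

package nscheck

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForNoPods polls until the recreated namespace reports zero pods.
func waitForNoPods(ctx context.Context, cs kubernetes.Interface, ns string) error {
	return wait.PollImmediate(2*time.Second, time.Minute, func() (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{})
		if err != nil {
			return false, err
		}
		return len(pods.Items) == 0, nil
	})
}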
• [SLOW TEST:20.568 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":261,"skipped":4306,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:55:31.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium May 12 17:55:31.391: INFO: Waiting up to 5m0s for pod "pod-c1a59e9f-d1e5-49d0-902d-3db5540b9e02" in namespace "emptydir-1020" to be "success or failure" May 12 17:55:31.421: INFO: Pod "pod-c1a59e9f-d1e5-49d0-902d-3db5540b9e02": Phase="Pending", Reason="", readiness=false. Elapsed: 29.669572ms May 12 17:55:33.423: INFO: Pod "pod-c1a59e9f-d1e5-49d0-902d-3db5540b9e02": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032361995s May 12 17:55:35.438: INFO: Pod "pod-c1a59e9f-d1e5-49d0-902d-3db5540b9e02": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04680028s May 12 17:55:37.473: INFO: Pod "pod-c1a59e9f-d1e5-49d0-902d-3db5540b9e02": Phase="Running", Reason="", readiness=true. Elapsed: 6.081775604s May 12 17:55:39.572: INFO: Pod "pod-c1a59e9f-d1e5-49d0-902d-3db5540b9e02": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.180611735s STEP: Saw pod success May 12 17:55:39.572: INFO: Pod "pod-c1a59e9f-d1e5-49d0-902d-3db5540b9e02" satisfied condition "success or failure" May 12 17:55:39.574: INFO: Trying to get logs from node jerma-worker pod pod-c1a59e9f-d1e5-49d0-902d-3db5540b9e02 container test-container: STEP: delete the pod May 12 17:55:39.900: INFO: Waiting for pod pod-c1a59e9f-d1e5-49d0-902d-3db5540b9e02 to disappear May 12 17:55:39.940: INFO: Pod pod-c1a59e9f-d1e5-49d0-902d-3db5540b9e02 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:55:39.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1020" for this suite. 
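The EmptyDir conformance variants in this run differ only in the file mode being tested and the backing medium: "default" means node-local storage, while the tmpfs variant (the next test below) sets medium Memory. The two volume sources side by side, as an offline sketch:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	onDisk := corev1.VolumeSource{
		EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault}, // node's default storage
	}
	inMemory := corev1.VolumeSource{
		EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory}, // tmpfs-backed
	}
	fmt.Printf("default medium: %q, tmpfs medium: %q\n",
		onDisk.EmptyDir.Medium, inMemory.EmptyDir.Medium)
}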
• [SLOW TEST:8.901 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":262,"skipped":4308,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:55:39.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs May 12 17:55:40.954: INFO: Waiting up to 5m0s for pod "pod-054ebc8f-948b-4af9-b6b3-9d3394aa6119" in namespace "emptydir-9506" to be "success or failure" May 12 17:55:40.982: INFO: Pod "pod-054ebc8f-948b-4af9-b6b3-9d3394aa6119": Phase="Pending", Reason="", readiness=false. Elapsed: 27.69133ms May 12 17:55:43.153: INFO: Pod "pod-054ebc8f-948b-4af9-b6b3-9d3394aa6119": Phase="Pending", Reason="", readiness=false. Elapsed: 2.199007034s May 12 17:55:45.361: INFO: Pod "pod-054ebc8f-948b-4af9-b6b3-9d3394aa6119": Phase="Pending", Reason="", readiness=false. Elapsed: 4.407046413s May 12 17:55:47.364: INFO: Pod "pod-054ebc8f-948b-4af9-b6b3-9d3394aa6119": Phase="Running", Reason="", readiness=true. Elapsed: 6.409822287s May 12 17:55:49.367: INFO: Pod "pod-054ebc8f-948b-4af9-b6b3-9d3394aa6119": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.412459821s STEP: Saw pod success May 12 17:55:49.367: INFO: Pod "pod-054ebc8f-948b-4af9-b6b3-9d3394aa6119" satisfied condition "success or failure" May 12 17:55:49.369: INFO: Trying to get logs from node jerma-worker pod pod-054ebc8f-948b-4af9-b6b3-9d3394aa6119 container test-container: STEP: delete the pod May 12 17:55:49.754: INFO: Waiting for pod pod-054ebc8f-948b-4af9-b6b3-9d3394aa6119 to disappear May 12 17:55:49.784: INFO: Pod pod-054ebc8f-948b-4af9-b6b3-9d3394aa6119 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:55:49.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9506" for this suite. 
• [SLOW TEST:9.993 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":263,"skipped":4326,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SS ------------------------------ [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:55:49.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1626 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 12 17:55:50.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-3724' May 12 17:55:58.814: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 12 17:55:58.814: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1631 May 12 17:56:01.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-3724' May 12 17:56:01.983: INFO: stderr: "" May 12 17:56:01.983: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:56:01.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3724" for this suite. 
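The deprecation warning in the kubectl run test is expected: the --generator=deployment/apps.v1 form was removed in later releases in favor of kubectl create deployment. The object the generator produces is roughly the following offline sketch (the run=NAME label convention shown here is an assumption about the generator, not taken from this log):

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	labels := map[string]string{"run": "e2e-test-httpd-deployment"} // assumed generator label
	dep := appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-httpd-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "e2e-test-httpd-deployment",
					Image: "docker.io/library/httpd:2.4.38-alpine",
				}}},
			},
		},
	}
	b, _ := json.MarshalIndent(dep, "", "  ")
	fmt.Println(string(b))
}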
• [SLOW TEST:12.048 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1622 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":278,"completed":264,"skipped":4328,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:56:01.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 12 17:56:05.573: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 12 17:56:07.754: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902965, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902965, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902966, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902965, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 17:56:09.833: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902965, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902965, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63724902966, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902965, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 12 17:56:13.189: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 12 17:56:13.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:56:15.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6253" for this suite. STEP: Destroying namespace "webhook-6253-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:14.963 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":265,"skipped":4349,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:56:16.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:56:25.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4478" for this suite. • [SLOW TEST:8.684 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":266,"skipped":4370,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:56:25.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating pod May 12 17:56:32.447: INFO: Pod pod-hostip-53dec948-24f7-44d3-a8bb-ccd3040a9749 has hostIP: 172.17.0.8 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:56:32.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5994" for this suite. 
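The host-IP check above reduces to reading pod.Status.HostIP once the pod has been scheduled; the 172.17.0.8 reported is the node address of jerma-worker. A compile-only client-go sketch, assuming the context-aware API (v0.18+):

package podinfo

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// hostIP returns the IP of the node the pod landed on; the field stays empty
// until the scheduler has placed the pod and the kubelet has reported status.
func hostIP(ctx context.Context, cs kubernetes.Interface, ns, name string) (string, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return "", err
	}
	return pod.Status.HostIP, nil
}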
• [SLOW TEST:6.847 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":267,"skipped":4370,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:56:32.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 12 17:56:33.146: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:56:36.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2021" for this suite. 
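Custom resource defaulting is declared in the CRD's structural OpenAPI v3 schema; the apiserver applies the defaults both on admission ("for requests") and when serving persisted objects ("from storage"), which is the pair of behaviors this test exercises. A sketch of a schema property carrying a default, using the apiextensions v1 types (the property name and value are made up):

package main

import (
	"encoding/json"
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

func main() {
	schema := apiextensionsv1.JSONSchemaProps{
		Type: "object",
		Properties: map[string]apiextensionsv1.JSONSchemaProps{
			"color": {
				Type:    "string",
				Default: &apiextensionsv1.JSON{Raw: []byte(`"blue"`)}, // applied when the field is omitted
			},
		},
	}
	b, _ := json.MarshalIndent(schema, "", "  ")
	fmt.Println(string(b))
}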
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":278,"completed":268,"skipped":4391,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:56:36.718: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 12 17:56:39.558: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 12 17:56:41.578: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724903000, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724903000, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724903000, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902999, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 17:56:43.825: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724903000, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724903000, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724903000, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902999, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 17:56:45.585: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, 
UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724903000, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724903000, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724903000, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724902999, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 12 17:56:48.621: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:56:48.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6080" for this suite. STEP: Destroying namespace "webhook-6080-markers" for this suite. 
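"Fail closed" is the failurePolicy: Fail setting: because the registered webhook points at a backend the API server cannot reach, every matching request is rejected rather than waved through. A sketch of such a configuration using the admissionregistration v1 types (the names, namespace, and rule are illustrative; this run's cluster registered its webhook through the framework, not this code):

package main

import (
	"encoding/json"
	"fmt"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
)

func main() {
	fail := admissionregistrationv1.Fail
	none := admissionregistrationv1.SideEffectClassNone
	path := "/unreachable"
	cfg := admissionregistrationv1.ValidatingWebhookConfiguration{
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name:          "fail-closed.example.com",
			FailurePolicy: &fail, // reject requests whenever the webhook cannot be called
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-markers", Name: "e2e-test-webhook", Path: &path,
				},
			},
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups: []string{""}, APIVersions: []string{"v1"}, Resources: []string{"configmaps"},
				},
			}},
			SideEffects:             &none,
			AdmissionReviewVersions: []string{"v1"},
		}},
	}
	b, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(b))
}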
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.535 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":269,"skipped":4404,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:56:49.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 12 17:56:57.358: INFO: Successfully updated pod "annotationupdate14d5f088-ca43-4dd4-bec0-ddeae9aef4a6" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:57:01.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-541" for this suite. 
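The annotation-update test depends on the kubelet keeping downward API files in sync: when the pod's metadata.annotations change, the mounted file is rewritten and the watching container observes the new content. The corresponding projected item, as an offline sketch:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "annotations", // file rewritten by the kubelet on annotation changes
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
						}},
					},
				}},
			},
		},
	}
	b, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(b))
}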
• [SLOW TEST:12.152 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":270,"skipped":4422,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:57:01.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting the proxy server May 12 17:57:01.631: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:57:01.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9690" for this suite. 
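Passing --port 0 tells kubectl proxy to bind an ephemeral port chosen by the kernel, which is why the test can start the proxy without risking a collision and can then read the chosen port back from the proxy's startup output. The underlying mechanism is ordinary port-0 binding:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Asking for port 0 lets the OS pick any free ephemeral port.
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	defer ln.Close()
	fmt.Println("listening on port", ln.Addr().(*net.TCPAddr).Port)
}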
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":271,"skipped":4430,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:57:01.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 12 17:57:03.939: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 12 17:57:06.404: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724903023, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724903023, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724903024, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724903023, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 12 17:57:08.477: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724903023, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724903023, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724903024, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724903023, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 12 17:57:11.532: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] 
should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:57:11.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6959" for this suite. STEP: Destroying namespace "webhook-6959-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.373 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":272,"skipped":4453,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:57:12.113: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name projected-secret-test-50c939e8-859d-4947-8106-1ada4baaf03c STEP: Creating a pod to test consume secrets May 12 17:57:12.342: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1e5c3554-25ef-439d-b569-3d7d99dbb7ef" in namespace "projected-232" to be "success or failure" May 12 17:57:12.350: INFO: Pod "pod-projected-secrets-1e5c3554-25ef-439d-b569-3d7d99dbb7ef": Phase="Pending", Reason="", readiness=false. Elapsed: 8.474985ms May 12 17:57:14.353: INFO: Pod "pod-projected-secrets-1e5c3554-25ef-439d-b569-3d7d99dbb7ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010670402s May 12 17:57:16.356: INFO: Pod "pod-projected-secrets-1e5c3554-25ef-439d-b569-3d7d99dbb7ef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014512606s May 12 17:57:19.017: INFO: Pod "pod-projected-secrets-1e5c3554-25ef-439d-b569-3d7d99dbb7ef": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.675304941s STEP: Saw pod success May 12 17:57:19.017: INFO: Pod "pod-projected-secrets-1e5c3554-25ef-439d-b569-3d7d99dbb7ef" satisfied condition "success or failure" May 12 17:57:19.020: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-1e5c3554-25ef-439d-b569-3d7d99dbb7ef container secret-volume-test: STEP: delete the pod May 12 17:57:19.372: INFO: Waiting for pod pod-projected-secrets-1e5c3554-25ef-439d-b569-3d7d99dbb7ef to disappear May 12 17:57:19.406: INFO: Pod pod-projected-secrets-1e5c3554-25ef-439d-b569-3d7d99dbb7ef no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:57:19.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-232" for this suite. • [SLOW TEST:7.342 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":273,"skipped":4468,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:57:19.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium May 12 17:57:19.886: INFO: Waiting up to 5m0s for pod "pod-6ae81717-a912-4e72-aa86-2f8cc1274693" in namespace "emptydir-1146" to be "success or failure" May 12 17:57:19.939: INFO: Pod "pod-6ae81717-a912-4e72-aa86-2f8cc1274693": Phase="Pending", Reason="", readiness=false. Elapsed: 53.121928ms May 12 17:57:22.124: INFO: Pod "pod-6ae81717-a912-4e72-aa86-2f8cc1274693": Phase="Pending", Reason="", readiness=false. Elapsed: 2.23778303s May 12 17:57:24.148: INFO: Pod "pod-6ae81717-a912-4e72-aa86-2f8cc1274693": Phase="Pending", Reason="", readiness=false. Elapsed: 4.261588555s May 12 17:57:26.166: INFO: Pod "pod-6ae81717-a912-4e72-aa86-2f8cc1274693": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.28004962s STEP: Saw pod success May 12 17:57:26.166: INFO: Pod "pod-6ae81717-a912-4e72-aa86-2f8cc1274693" satisfied condition "success or failure" May 12 17:57:26.169: INFO: Trying to get logs from node jerma-worker2 pod pod-6ae81717-a912-4e72-aa86-2f8cc1274693 container test-container: STEP: delete the pod May 12 17:57:26.222: INFO: Waiting for pod pod-6ae81717-a912-4e72-aa86-2f8cc1274693 to disappear May 12 17:57:26.226: INFO: Pod pod-6ae81717-a912-4e72-aa86-2f8cc1274693 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 12 17:57:26.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1146" for this suite. • [SLOW TEST:6.880 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":274,"skipped":4470,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]} SSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 12 17:57:26.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication controller my-hostname-basic-45e8d3e0-1131-417e-9e20-33ce7759ab28 May 12 17:57:26.663: INFO: Pod name my-hostname-basic-45e8d3e0-1131-417e-9e20-33ce7759ab28: Found 0 pods out of 1 May 12 17:57:31.915: INFO: Pod name my-hostname-basic-45e8d3e0-1131-417e-9e20-33ce7759ab28: Found 1 pods out of 1 May 12 17:57:31.915: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-45e8d3e0-1131-417e-9e20-33ce7759ab28" are running May 12 17:57:33.982: INFO: Pod "my-hostname-basic-45e8d3e0-1131-417e-9e20-33ce7759ab28-rxbdn" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 17:57:26 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 17:57:26 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-45e8d3e0-1131-417e-9e20-33ce7759ab28]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 17:57:26 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-45e8d3e0-1131-417e-9e20-33ce7759ab28]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC 
[sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 12 17:57:26.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating replication controller my-hostname-basic-45e8d3e0-1131-417e-9e20-33ce7759ab28
May 12 17:57:26.663: INFO: Pod name my-hostname-basic-45e8d3e0-1131-417e-9e20-33ce7759ab28: Found 0 pods out of 1
May 12 17:57:31.915: INFO: Pod name my-hostname-basic-45e8d3e0-1131-417e-9e20-33ce7759ab28: Found 1 pods out of 1
May 12 17:57:31.915: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-45e8d3e0-1131-417e-9e20-33ce7759ab28" are running
May 12 17:57:33.982: INFO: Pod "my-hostname-basic-45e8d3e0-1131-417e-9e20-33ce7759ab28-rxbdn" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 17:57:26 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 17:57:26 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-45e8d3e0-1131-417e-9e20-33ce7759ab28]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 17:57:26 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-45e8d3e0-1131-417e-9e20-33ce7759ab28]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-12 17:57:26 +0000 UTC Reason: Message:}])
May 12 17:57:33.982: INFO: Trying to dial the pod
May 12 17:57:38.991: INFO: Controller my-hostname-basic-45e8d3e0-1131-417e-9e20-33ce7759ab28: Got expected result from replica 1 [my-hostname-basic-45e8d3e0-1131-417e-9e20-33ce7759ab28-rxbdn]: "my-hostname-basic-45e8d3e0-1131-417e-9e20-33ce7759ab28-rxbdn", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 12 17:57:38.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-163" for this suite.
• [SLOW TEST:12.660 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":275,"skipped":4476,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 12 17:57:38.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 12 17:57:47.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9480" for this suite.
• [SLOW TEST:8.963 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":276,"skipped":4488,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
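The Kubelet spec above logs only setup and teardown; the interesting part — run a busybox command and read its output back through the pod-logs subresource — happens between the [It] and [AfterEach] markers. Here is a hedged sketch of that pattern using client-go as it looked in the era of this run (pre-context signatures, ~v1.17); the pod name, namespace handling, and the elided wait are assumptions, not the test's own code.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // runAndFetchLogs creates a one-shot busybox pod that echoes to stdout,
    // then reads the output back via the pod-logs subresource. The wait for
    // the pod to reach phase Succeeded is elided for brevity.
    func runAndFetchLogs(cs kubernetes.Interface, ns string) (string, error) {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "busybox-logs-demo"}, // assumed name
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "busybox",
                    Image:   "busybox:1.29",
                    Command: []string{"sh", "-c", "echo 'hello from busybox'"},
                }},
            },
        }
        if _, err := cs.CoreV1().Pods(ns).Create(pod); err != nil { // pre-context signature (~v1.17)
            return "", fmt.Errorf("creating pod: %v", err)
        }
        // ... wait here until the pod has run to completion ...
        raw, err := cs.CoreV1().Pods(ns).GetLogs("busybox-logs-demo", &corev1.PodLogOptions{}).DoRaw()
        if err != nil {
            return "", fmt.Errorf("reading logs: %v", err)
        }
        return string(raw), nil
    }

Anything the container writes to stdout is what GetLogs returns, which is the property this conformance spec verifies.
------------------------------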
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 12 17:57:47.960: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 12 17:58:05.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3387" for this suite.
• [SLOW TEST:17.772 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":277,"skipped":4558,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]}
SSSSSS
May 12 17:58:05.732: INFO: Running AfterSuite actions on all nodes
May 12 17:58:05.732: INFO: Running AfterSuite actions on node 1
May 12 17:58:05.732: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":277,"skipped":4564,"failed":1,"failures":["[sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]"]}

Summarizing 1 Failure:

[Fail] [sig-cli] Kubectl client Kubectl logs [It] should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1410

Ran 278 of 4842 Specs in 6242.184 seconds
FAIL! -- 277 Passed | 1 Failed | 0 Pending | 4564 Skipped
--- FAIL: TestE2E (6242.27s)
FAIL
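------------------------------
For reference, the last spec to pass above (ResourceQuota capturing the life of a configMap) follows the STEP sequence in its log: create a quota that counts configmaps, see Status.Used rise when a ConfigMap is created, and see it released after deletion. A minimal sketch of that flow, with assumed object names and the same pre-context client-go signatures as elsewhere in this run; real code polls for the quota controller's asynchronous status updates rather than reading once.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // quotaConfigMapLifecycle mirrors the spec's STEP sequence: create a
    // quota counting configmaps, create a ConfigMap, observe the usage,
    // then delete the ConfigMap so the usage is released.
    func quotaConfigMapLifecycle(cs kubernetes.Interface, ns string) error {
        quota := &corev1.ResourceQuota{
            ObjectMeta: metav1.ObjectMeta{Name: "quota-demo"}, // assumed name
            Spec: corev1.ResourceQuotaSpec{
                Hard: corev1.ResourceList{corev1.ResourceConfigMaps: resource.MustParse("2")},
            },
        }
        if _, err := cs.CoreV1().ResourceQuotas(ns).Create(quota); err != nil {
            return err
        }
        cm := &corev1.ConfigMap{
            ObjectMeta: metav1.ObjectMeta{Name: "cm-demo"}, // assumed name
            Data:       map[string]string{"key": "value"},
        }
        if _, err := cs.CoreV1().ConfigMaps(ns).Create(cm); err != nil {
            return err
        }
        // ... poll here until the quota controller has recalculated usage ...
        q, err := cs.CoreV1().ResourceQuotas(ns).Get("quota-demo", metav1.GetOptions{})
        if err != nil {
            return err
        }
        used := q.Status.Used[corev1.ResourceConfigMaps]
        fmt.Printf("configmaps used: %s\n", used.String())
        // Deleting the ConfigMap should release the usage again.
        return cs.CoreV1().ConfigMaps(ns).Delete("cm-demo", &metav1.DeleteOptions{})
    }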