I0506 22:50:05.319779 6 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0506 22:50:05.320001 6 e2e.go:109] Starting e2e run "dda4c29b-ce4d-4fdd-b877-1cb0da7a3874" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1588805404 - Will randomize all specs
Will run 278 of 4842 specs

May 6 22:50:05.389: INFO: >>> kubeConfig: /root/.kube/config
May 6 22:50:05.392: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 6 22:50:05.416: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 6 22:50:05.453: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 6 22:50:05.453: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 6 22:50:05.453: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 6 22:50:05.462: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 6 22:50:05.462: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 6 22:50:05.462: INFO: e2e test version: v1.17.4
May 6 22:50:05.463: INFO: kube-apiserver version: v1.17.2
May 6 22:50:05.463: INFO: >>> kubeConfig: /root/.kube/config
May 6 22:50:05.469: INFO: Cluster IP family: ipv4
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 6 22:50:05.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
May 6 22:50:05.551: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
May 6 22:50:05.558: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f7ae2bf8-7af1-4ffc-9864-b7bb9f8569a8" in namespace "projected-4686" to be "success or failure"
May 6 22:50:05.563: INFO: Pod "downwardapi-volume-f7ae2bf8-7af1-4ffc-9864-b7bb9f8569a8": Phase="Pending", Reason="", readiness=false. Elapsed: 5.442919ms
May 6 22:50:07.692: INFO: Pod "downwardapi-volume-f7ae2bf8-7af1-4ffc-9864-b7bb9f8569a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134217672s
May 6 22:50:09.696: INFO: Pod "downwardapi-volume-f7ae2bf8-7af1-4ffc-9864-b7bb9f8569a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.138697456s
STEP: Saw pod success
May 6 22:50:09.697: INFO: Pod "downwardapi-volume-f7ae2bf8-7af1-4ffc-9864-b7bb9f8569a8" satisfied condition "success or failure"
May 6 22:50:09.700: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-f7ae2bf8-7af1-4ffc-9864-b7bb9f8569a8 container client-container:
STEP: delete the pod
May 6 22:50:09.782: INFO: Waiting for pod downwardapi-volume-f7ae2bf8-7af1-4ffc-9864-b7bb9f8569a8 to disappear
May 6 22:50:09.907: INFO: Pod downwardapi-volume-f7ae2bf8-7af1-4ffc-9864-b7bb9f8569a8 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 6 22:50:09.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4686" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":1,"skipped":11,"failed":0}
SSS
------------------------------
[sig-apps] Job
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 6 22:50:09.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 6 22:50:34.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-3644" for this suite.
• [SLOW TEST:24.481 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":2,"skipped":14,"failed":0}
SS
------------------------------
[sig-storage] Projected configMap
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 6 22:50:34.396: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-1c0943d0-73f5-42b7-a78a-69c467d003a2
STEP: Creating a pod to test consume configMaps
May 6 22:50:34.512: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4745d16c-83d7-4fa0-8c10-a09f6ac59486" in namespace "projected-1517" to be "success or failure"
May 6 22:50:34.620: INFO: Pod "pod-projected-configmaps-4745d16c-83d7-4fa0-8c10-a09f6ac59486": Phase="Pending", Reason="", readiness=false. Elapsed: 107.679316ms
May 6 22:50:36.624: INFO: Pod "pod-projected-configmaps-4745d16c-83d7-4fa0-8c10-a09f6ac59486": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111872024s
May 6 22:50:38.629: INFO: Pod "pod-projected-configmaps-4745d16c-83d7-4fa0-8c10-a09f6ac59486": Phase="Pending", Reason="", readiness=false. Elapsed: 4.11629127s
May 6 22:50:40.657: INFO: Pod "pod-projected-configmaps-4745d16c-83d7-4fa0-8c10-a09f6ac59486": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.144790847s
STEP: Saw pod success
May 6 22:50:40.657: INFO: Pod "pod-projected-configmaps-4745d16c-83d7-4fa0-8c10-a09f6ac59486" satisfied condition "success or failure"
May 6 22:50:40.675: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-4745d16c-83d7-4fa0-8c10-a09f6ac59486 container projected-configmap-volume-test:
STEP: delete the pod
May 6 22:50:41.058: INFO: Waiting for pod pod-projected-configmaps-4745d16c-83d7-4fa0-8c10-a09f6ac59486 to disappear
May 6 22:50:41.063: INFO: Pod pod-projected-configmaps-4745d16c-83d7-4fa0-8c10-a09f6ac59486 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 6 22:50:41.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1517" for this suite.
• [SLOW TEST:6.674 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":3,"skipped":16,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 6 22:50:41.071: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
May 6 22:50:47.242: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-1711 PodName:pod-sharedvolume-37c0ced9-8586-4b05-a63b-cb700e11f98f ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 6 22:50:47.242: INFO: >>> kubeConfig: /root/.kube/config
I0506 22:50:47.267949 6 log.go:172] (0xc001cead10) (0xc00275f220) Create stream
I0506 22:50:47.267982 6 log.go:172] (0xc001cead10) (0xc00275f220) Stream added, broadcasting: 1
I0506 22:50:47.270995 6 log.go:172] (0xc001cead10) Reply frame received for 1
I0506 22:50:47.271033 6 log.go:172] (0xc001cead10) (0xc0027d7360) Create stream
I0506 22:50:47.271057 6 log.go:172] (0xc001cead10) (0xc0027d7360) Stream added, broadcasting: 3
I0506 22:50:47.271829 6 log.go:172] (0xc001cead10) Reply frame received for 3
I0506 22:50:47.271883 6 log.go:172] (0xc001cead10) (0xc0028a7ae0) Create stream
I0506 22:50:47.271896 6 log.go:172] (0xc001cead10) (0xc0028a7ae0) Stream added, broadcasting: 5
I0506 22:50:47.272753 6 log.go:172] (0xc001cead10) Reply frame received for 5
I0506 22:50:47.358850 6 log.go:172] (0xc001cead10) Data frame received for 3
I0506 22:50:47.358885 6 log.go:172] (0xc0027d7360) (3) Data frame handling
I0506 22:50:47.358898 6 log.go:172] (0xc0027d7360) (3) Data frame sent
I0506 22:50:47.358914 6 log.go:172] (0xc001cead10) Data frame received for 5
I0506 22:50:47.358944 6 log.go:172] (0xc0028a7ae0) (5) Data frame handling
I0506 22:50:47.358963 6 log.go:172] (0xc001cead10) Data frame received for 3
I0506 22:50:47.358970 6 log.go:172] (0xc0027d7360) (3) Data frame handling
I0506 22:50:47.365612 6 log.go:172] (0xc001cead10) Data frame received for 1
I0506 22:50:47.365729 6 log.go:172] (0xc00275f220) (1) Data frame handling
I0506 22:50:47.365782 6 log.go:172] (0xc00275f220) (1) Data frame sent
I0506 22:50:47.365823 6 log.go:172] (0xc001cead10) (0xc00275f220) Stream removed, broadcasting: 1
I0506 22:50:47.365867 6 log.go:172] (0xc001cead10) Go away received
I0506 22:50:47.366273 6 log.go:172] (0xc001cead10) (0xc00275f220) Stream removed, broadcasting: 1
I0506 22:50:47.366291 6 log.go:172] (0xc001cead10) (0xc0027d7360) Stream removed, broadcasting: 3
I0506 22:50:47.366300 6 log.go:172] (0xc001cead10) (0xc0028a7ae0) Stream removed, broadcasting: 5
May 6 22:50:47.366: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 6 22:50:47.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1711" for this suite.
• [SLOW TEST:6.302 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":4,"skipped":39,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 6 22:50:47.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on tmpfs
May 6 22:50:47.467: INFO: Waiting up to 5m0s for pod "pod-3f6b4901-7970-46fb-8169-05a8d8031390" in namespace "emptydir-8201" to be "success or failure"
May 6 22:50:47.471: INFO: Pod "pod-3f6b4901-7970-46fb-8169-05a8d8031390": Phase="Pending", Reason="", readiness=false. Elapsed: 3.910444ms
May 6 22:50:49.475: INFO: Pod "pod-3f6b4901-7970-46fb-8169-05a8d8031390": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008407364s
May 6 22:50:51.479: INFO: Pod "pod-3f6b4901-7970-46fb-8169-05a8d8031390": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012286095s
May 6 22:50:53.483: INFO: Pod "pod-3f6b4901-7970-46fb-8169-05a8d8031390": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016364688s
STEP: Saw pod success
May 6 22:50:53.484: INFO: Pod "pod-3f6b4901-7970-46fb-8169-05a8d8031390" satisfied condition "success or failure"
May 6 22:50:53.486: INFO: Trying to get logs from node jerma-worker pod pod-3f6b4901-7970-46fb-8169-05a8d8031390 container test-container:
STEP: delete the pod
May 6 22:50:53.570: INFO: Waiting for pod pod-3f6b4901-7970-46fb-8169-05a8d8031390 to disappear
May 6 22:50:53.577: INFO: Pod pod-3f6b4901-7970-46fb-8169-05a8d8031390 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 6 22:50:53.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8201" for this suite.
• [SLOW TEST:6.210 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":5,"skipped":60,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 6 22:50:53.584: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-392
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
May 6 22:50:53.715: INFO: Found 0 stateful pods, waiting for 3
May 6 22:51:03.759: INFO: Found 2 stateful pods, waiting for 3
May 6 22:51:13.728: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May 6 22:51:13.729: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May 6 22:51:13.729: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
May 6 22:51:13.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-392 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May 6 22:51:20.499: INFO: stderr: "I0506 22:51:20.374828 28 log.go:172] (0xc0008b7600) (0xc0006f9d60) Create stream\nI0506 22:51:20.374898 28 log.go:172] (0xc0008b7600) (0xc0006f9d60) Stream added, broadcasting: 1\nI0506 22:51:20.377362 28 log.go:172] (0xc0008b7600) Reply frame received for 1\nI0506 22:51:20.377402 28 log.go:172] (0xc0008b7600) (0xc000634500) Create stream\nI0506 22:51:20.377414 28 log.go:172] (0xc0008b7600) (0xc000634500) Stream added, broadcasting: 3\nI0506 22:51:20.378209 28 log.go:172] (0xc0008b7600) Reply frame received for 3\nI0506 22:51:20.378260 28 log.go:172] (0xc0008b7600) (0xc0004892c0) Create stream\nI0506 22:51:20.378278 28 log.go:172] (0xc0008b7600) (0xc0004892c0) Stream added, broadcasting: 5\nI0506 22:51:20.378966 28 log.go:172] (0xc0008b7600) Reply frame received for 5\nI0506 22:51:20.433750 28 log.go:172] (0xc0008b7600) Data frame received for 5\nI0506 22:51:20.433776 28 log.go:172] (0xc0004892c0) (5) Data frame handling\nI0506 22:51:20.433790 28 log.go:172] (0xc0004892c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0506 22:51:20.490029 28 log.go:172] (0xc0008b7600) Data frame received for 3\nI0506 22:51:20.490055 28 log.go:172] (0xc000634500) (3) Data frame handling\nI0506 22:51:20.490235 28 log.go:172] (0xc000634500) (3) Data frame sent\nI0506 22:51:20.490381 28 log.go:172] (0xc0008b7600) Data frame received for 3\nI0506 22:51:20.490441 28 log.go:172] (0xc000634500) (3) Data frame handling\nI0506 22:51:20.490696 28 log.go:172] (0xc0008b7600) Data frame received for 5\nI0506 22:51:20.490719 28 log.go:172] (0xc0004892c0) (5) Data frame handling\nI0506 22:51:20.492208 28 log.go:172] (0xc0008b7600) Data frame received for 1\nI0506 22:51:20.492225 28 log.go:172] (0xc0006f9d60) (1) Data frame handling\nI0506 22:51:20.492236 28 log.go:172] (0xc0006f9d60) (1) Data frame sent\nI0506 22:51:20.492320 28 log.go:172] (0xc0008b7600) (0xc0006f9d60) Stream removed, broadcasting: 1\nI0506 22:51:20.492467 28 log.go:172] (0xc0008b7600) Go away received\nI0506 22:51:20.493036 28 log.go:172] (0xc0008b7600) (0xc0006f9d60) Stream removed, broadcasting: 1\nI0506 22:51:20.493059 28 log.go:172] (0xc0008b7600) (0xc000634500) Stream removed, broadcasting: 3\nI0506 22:51:20.493072 28 log.go:172] (0xc0008b7600) (0xc0004892c0) Stream removed, broadcasting: 5\n"
May 6 22:51:20.499: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May 6 22:51:20.499: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
May 6 22:51:30.548: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
May 6 22:51:40.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-392 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 6 22:51:41.036: INFO: stderr: "I0506 22:51:40.939333 62 log.go:172] (0xc000540f20) (0xc000669ae0) Create stream\nI0506 22:51:40.939387 62 log.go:172] (0xc000540f20) (0xc000669ae0) Stream added, broadcasting: 1\nI0506 22:51:40.956410 62 log.go:172] (0xc000540f20) Reply frame received for 1\nI0506 22:51:40.956470 62 log.go:172] (0xc000540f20) (0xc0008000a0) Create stream\nI0506 22:51:40.956482 62 log.go:172] (0xc000540f20) (0xc0008000a0) Stream added, broadcasting: 3\nI0506 22:51:40.957862 62 log.go:172] (0xc000540f20) Reply frame received for 3\nI0506 22:51:40.957899 62 log.go:172] (0xc000540f20) (0xc000800140) Create stream\nI0506 22:51:40.957906 62 log.go:172] (0xc000540f20) (0xc000800140) Stream added, broadcasting: 5\nI0506 22:51:40.962217 62 log.go:172] (0xc000540f20) Reply frame received for 5\nI0506 22:51:41.028399 62 log.go:172] (0xc000540f20) Data frame received for 3\nI0506 22:51:41.028438 62 log.go:172] (0xc0008000a0) (3) Data frame handling\nI0506 22:51:41.028454 62 log.go:172] (0xc0008000a0) (3) Data frame sent\nI0506 22:51:41.028469 62 log.go:172] (0xc000540f20) Data frame received for 3\nI0506 22:51:41.028479 62 log.go:172] (0xc0008000a0) (3) Data frame handling\nI0506 22:51:41.028492 62 log.go:172] (0xc000540f20) Data frame received for 5\nI0506 22:51:41.028499 62 log.go:172] (0xc000800140) (5) Data frame handling\nI0506 22:51:41.028508 62 log.go:172] (0xc000800140) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0506 22:51:41.028646 62 log.go:172] (0xc000540f20) Data frame received for 5\nI0506 22:51:41.028680 62 log.go:172] (0xc000800140) (5) Data frame handling\nI0506 22:51:41.030545 62 log.go:172] (0xc000540f20) Data frame received for 1\nI0506 22:51:41.030559 62 log.go:172] (0xc000669ae0) (1) Data frame handling\nI0506 22:51:41.030566 62 log.go:172] (0xc000669ae0) (1) Data frame sent\nI0506 22:51:41.030573 62 log.go:172] (0xc000540f20) (0xc000669ae0) Stream removed, broadcasting: 1\nI0506 22:51:41.030581 62 log.go:172] (0xc000540f20) Go away received\nI0506 22:51:41.030983 62 log.go:172] (0xc000540f20) (0xc000669ae0) Stream removed, broadcasting: 1\nI0506 22:51:41.031011 62 log.go:172] (0xc000540f20) (0xc0008000a0) Stream removed, broadcasting: 3\nI0506 22:51:41.031041 62 log.go:172] (0xc000540f20) (0xc000800140) Stream removed, broadcasting: 5\n"
May 6 22:51:41.036: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May 6 22:51:41.036: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
May 6 22:52:21.179: INFO: Waiting for StatefulSet statefulset-392/ss2 to complete update
STEP: Rolling back to a previous revision
May 6 22:52:31.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-392 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May 6 22:52:31.855: INFO: stderr: "I0506 22:52:31.317892 83 log.go:172] (0xc000a3cc60) (0xc000a20320) Create stream\nI0506 22:52:31.317947 83 log.go:172] (0xc000a3cc60) (0xc000a20320) Stream added, broadcasting: 1\nI0506 22:52:31.321319 83 log.go:172] (0xc000a3cc60) Reply frame received for 1\nI0506 22:52:31.321383 83 log.go:172] (0xc000a3cc60) (0xc000a203c0) Create stream\nI0506 22:52:31.321412 83 log.go:172] (0xc000a3cc60) (0xc000a203c0) Stream added, broadcasting: 3\nI0506 22:52:31.322565 83 log.go:172] (0xc000a3cc60) Reply frame received for 3\nI0506 22:52:31.322605 83 log.go:172] (0xc000a3cc60) (0xc000a0e1e0) Create stream\nI0506 22:52:31.322615 83 log.go:172] (0xc000a3cc60) (0xc000a0e1e0) Stream added, broadcasting: 5\nI0506 22:52:31.323523 83 log.go:172] (0xc000a3cc60) Reply frame received for 5\nI0506 22:52:31.385339 83 log.go:172] (0xc000a3cc60) Data frame received for 5\nI0506 22:52:31.385470 83 log.go:172] (0xc000a0e1e0) (5) Data frame handling\nI0506 22:52:31.385493 83 log.go:172] (0xc000a0e1e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0506 22:52:31.847368 83 log.go:172] (0xc000a3cc60) Data frame received for 3\nI0506 22:52:31.847405 83 log.go:172] (0xc000a203c0) (3) Data frame handling\nI0506 22:52:31.847422 83 log.go:172] (0xc000a203c0) (3) Data frame sent\nI0506 22:52:31.847541 83 log.go:172] (0xc000a3cc60) Data frame received for 5\nI0506 22:52:31.847626 83 log.go:172] (0xc000a0e1e0) (5) Data frame handling\nI0506 22:52:31.847773 83 log.go:172] (0xc000a3cc60) Data frame received for 3\nI0506 22:52:31.847789 83 log.go:172] (0xc000a203c0) (3) Data frame handling\nI0506 22:52:31.850114 83 log.go:172] (0xc000a3cc60) Data frame received for 1\nI0506 22:52:31.850149 83 log.go:172] (0xc000a20320) (1) Data frame handling\nI0506 22:52:31.850166 83 log.go:172] (0xc000a20320) (1) Data frame sent\nI0506 22:52:31.850222 83 log.go:172] (0xc000a3cc60) (0xc000a20320) Stream removed, broadcasting: 1\nI0506 22:52:31.850252 83 log.go:172] (0xc000a3cc60) Go away received\nI0506 22:52:31.850713 83 log.go:172] (0xc000a3cc60) (0xc000a20320) Stream removed, broadcasting: 1\nI0506 22:52:31.850735 83 log.go:172] (0xc000a3cc60) (0xc000a203c0) Stream removed, broadcasting: 3\nI0506 22:52:31.850747 83 log.go:172] (0xc000a3cc60) (0xc000a0e1e0) Stream removed, broadcasting: 5\n"
May 6 22:52:31.855: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May 6 22:52:31.855: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
May 6 22:52:41.886: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
May 6 22:52:51.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-392 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 6 22:52:52.163: INFO: stderr: "I0506 22:52:52.092945 102 log.go:172] (0xc000105290) (0xc0006a9ae0) Create stream\nI0506 22:52:52.092994 102 log.go:172] (0xc000105290) (0xc0006a9ae0) Stream added, broadcasting: 1\nI0506 22:52:52.095501 102 log.go:172] (0xc000105290) Reply frame received for 1\nI0506 22:52:52.095531 102 log.go:172] (0xc000105290) (0xc0009b4000) Create stream\nI0506 22:52:52.095539 102 log.go:172] (0xc000105290) (0xc0009b4000) Stream added, broadcasting: 3\nI0506 22:52:52.096396 102 log.go:172] (0xc000105290) Reply frame received for 3\nI0506 22:52:52.096447 102 log.go:172] (0xc000105290) (0xc000286000) Create stream\nI0506 22:52:52.096463 102 log.go:172] (0xc000105290) (0xc000286000) Stream added, broadcasting: 5\nI0506 22:52:52.097382 102 log.go:172] (0xc000105290) Reply frame received for 5\nI0506 22:52:52.156965 102 log.go:172] (0xc000105290) Data frame received for 3\nI0506 22:52:52.156989 102 log.go:172] (0xc0009b4000) (3) Data frame handling\nI0506 22:52:52.157010 102 log.go:172] (0xc0009b4000) (3) Data frame sent\nI0506 22:52:52.157025 102 log.go:172] (0xc000105290) Data frame received for 3\nI0506 22:52:52.157031 102 log.go:172] (0xc0009b4000) (3) Data frame handling\nI0506 22:52:52.157610 102 log.go:172] (0xc000105290) Data frame received for 5\nI0506 22:52:52.157640 102 log.go:172] (0xc000286000) (5) Data frame handling\nI0506 22:52:52.157657 102 log.go:172] (0xc000286000) (5) Data frame sent\nI0506 22:52:52.157669 102 log.go:172] (0xc000105290) Data frame received for 5\nI0506 22:52:52.157679 102 log.go:172] (0xc000286000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0506 22:52:52.158709 102 log.go:172] (0xc000105290) Data frame received for 1\nI0506 22:52:52.158734 102 log.go:172] (0xc0006a9ae0) (1) Data frame handling\nI0506 22:52:52.158755 102 log.go:172] (0xc0006a9ae0) (1) Data frame sent\nI0506 22:52:52.158776 102 log.go:172] (0xc000105290) (0xc0006a9ae0) Stream removed, broadcasting: 1\nI0506 22:52:52.158864 102 log.go:172] (0xc000105290) Go away received\nI0506 22:52:52.159236 102 log.go:172] (0xc000105290) (0xc0006a9ae0) Stream removed, broadcasting: 1\nI0506 22:52:52.159263 102 log.go:172] (0xc000105290) (0xc0009b4000) Stream removed, broadcasting: 3\nI0506 22:52:52.159280 102 log.go:172] (0xc000105290) (0xc000286000) Stream removed, broadcasting: 5\n"
May 6 22:52:52.163: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May 6 22:52:52.163: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
May 6 22:53:22.185: INFO: Waiting for StatefulSet statefulset-392/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
May 6 22:53:32.193: INFO: Deleting all statefulset in ns statefulset-392
May 6 22:53:32.195: INFO: Scaling statefulset ss2 to 0
May 6 22:54:02.214: INFO: Waiting for statefulset status.replicas updated to 0
May 6 22:54:02.217: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 6 22:54:02.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-392" for this suite.
• [SLOW TEST:188.657 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":6,"skipped":70,"failed":0}
SSSSSS
------------------------------
[k8s.io] Docker Containers
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 6 22:54:02.242: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test override command
May 6 22:54:02.340: INFO: Waiting up to 5m0s for pod "client-containers-9b663fef-ce2a-4930-b1cb-ebaa0d30d015" in namespace "containers-3568" to be "success or failure"
May 6 22:54:02.346: INFO: Pod "client-containers-9b663fef-ce2a-4930-b1cb-ebaa0d30d015": Phase="Pending", Reason="", readiness=false. Elapsed: 6.089773ms
May 6 22:54:04.350: INFO: Pod "client-containers-9b663fef-ce2a-4930-b1cb-ebaa0d30d015": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009630219s
May 6 22:54:06.354: INFO: Pod "client-containers-9b663fef-ce2a-4930-b1cb-ebaa0d30d015": Phase="Running", Reason="", readiness=true. Elapsed: 4.013898677s
May 6 22:54:08.359: INFO: Pod "client-containers-9b663fef-ce2a-4930-b1cb-ebaa0d30d015": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.018895819s
STEP: Saw pod success
May 6 22:54:08.359: INFO: Pod "client-containers-9b663fef-ce2a-4930-b1cb-ebaa0d30d015" satisfied condition "success or failure"
May 6 22:54:08.361: INFO: Trying to get logs from node jerma-worker2 pod client-containers-9b663fef-ce2a-4930-b1cb-ebaa0d30d015 container test-container:
STEP: delete the pod
May 6 22:54:08.444: INFO: Waiting for pod client-containers-9b663fef-ce2a-4930-b1cb-ebaa0d30d015 to disappear
May 6 22:54:08.455: INFO: Pod client-containers-9b663fef-ce2a-4930-b1cb-ebaa0d30d015 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 6 22:54:08.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3568" for this suite.
• [SLOW TEST:6.222 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":7,"skipped":76,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 6 22:54:08.465: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 6 22:54:24.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9696" for this suite.
• [SLOW TEST:16.164 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":8,"skipped":127,"failed":0}
SSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 6 22:54:24.629: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
May 6 22:54:24.743: INFO: Waiting up to 5m0s for pod "pod-b13c4b0a-6206-468d-9a0f-69fecee2714e" in namespace "emptydir-1383" to be "success or failure"
May 6 22:54:24.770: INFO: Pod "pod-b13c4b0a-6206-468d-9a0f-69fecee2714e": Phase="Pending", Reason="", readiness=false. Elapsed: 27.140554ms
May 6 22:54:26.774: INFO: Pod "pod-b13c4b0a-6206-468d-9a0f-69fecee2714e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030541113s
May 6 22:54:28.851: INFO: Pod "pod-b13c4b0a-6206-468d-9a0f-69fecee2714e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107725841s
May 6 22:54:30.855: INFO: Pod "pod-b13c4b0a-6206-468d-9a0f-69fecee2714e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.111970912s
STEP: Saw pod success
May 6 22:54:30.855: INFO: Pod "pod-b13c4b0a-6206-468d-9a0f-69fecee2714e" satisfied condition "success or failure"
May 6 22:54:30.859: INFO: Trying to get logs from node jerma-worker2 pod pod-b13c4b0a-6206-468d-9a0f-69fecee2714e container test-container:
STEP: delete the pod
May 6 22:54:30.892: INFO: Waiting for pod pod-b13c4b0a-6206-468d-9a0f-69fecee2714e to disappear
May 6 22:54:30.928: INFO: Pod pod-b13c4b0a-6206-468d-9a0f-69fecee2714e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 6 22:54:30.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1383" for this suite.
• [SLOW TEST:6.306 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":9,"skipped":131,"failed":0}
S
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 6 22:54:30.936: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-c73de592-f27a-4da2-8f04-07b4a6e7ef45
STEP: Creating a pod to test consume configMaps
May 6 22:54:31.227: INFO: Waiting up to 5m0s for pod "pod-configmaps-0bf8e8bd-cf2d-42fd-8891-88ee00eee87c" in namespace "configmap-9553" to be "success or failure"
May 6 22:54:31.238: INFO: Pod "pod-configmaps-0bf8e8bd-cf2d-42fd-8891-88ee00eee87c": Phase="Pending", Reason="", readiness=false. Elapsed: 11.168286ms
May 6 22:54:33.260: INFO: Pod "pod-configmaps-0bf8e8bd-cf2d-42fd-8891-88ee00eee87c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033536024s
May 6 22:54:35.270: INFO: Pod "pod-configmaps-0bf8e8bd-cf2d-42fd-8891-88ee00eee87c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043648614s
May 6 22:54:37.275: INFO: Pod "pod-configmaps-0bf8e8bd-cf2d-42fd-8891-88ee00eee87c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.047822055s
STEP: Saw pod success
May 6 22:54:37.275: INFO: Pod "pod-configmaps-0bf8e8bd-cf2d-42fd-8891-88ee00eee87c" satisfied condition "success or failure"
May 6 22:54:37.278: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-0bf8e8bd-cf2d-42fd-8891-88ee00eee87c container configmap-volume-test:
STEP: delete the pod
May 6 22:54:37.326: INFO: Waiting for pod pod-configmaps-0bf8e8bd-cf2d-42fd-8891-88ee00eee87c to disappear
May 6 22:54:37.330: INFO: Pod pod-configmaps-0bf8e8bd-cf2d-42fd-8891-88ee00eee87c no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 6 22:54:37.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9553" for this suite.
• [SLOW TEST:6.414 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":10,"skipped":132,"failed":0}
SSSSS
------------------------------
[sig-storage] Downward API volume
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 6 22:54:37.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
May 6 22:54:37.470: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3c3f9b0f-5b8c-463b-ac84-b4679cd08e0a" in namespace "downward-api-2370" to be "success or failure"
May 6 22:54:37.474: INFO: Pod "downwardapi-volume-3c3f9b0f-5b8c-463b-ac84-b4679cd08e0a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.795653ms
May 6 22:54:39.478: INFO: Pod "downwardapi-volume-3c3f9b0f-5b8c-463b-ac84-b4679cd08e0a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007916303s
May 6 22:54:41.482: INFO: Pod "downwardapi-volume-3c3f9b0f-5b8c-463b-ac84-b4679cd08e0a": Phase="Running", Reason="", readiness=true. Elapsed: 4.011576321s
May 6 22:54:43.486: INFO: Pod "downwardapi-volume-3c3f9b0f-5b8c-463b-ac84-b4679cd08e0a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015775226s
STEP: Saw pod success
May 6 22:54:43.486: INFO: Pod "downwardapi-volume-3c3f9b0f-5b8c-463b-ac84-b4679cd08e0a" satisfied condition "success or failure"
May 6 22:54:43.489: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-3c3f9b0f-5b8c-463b-ac84-b4679cd08e0a container client-container:
STEP: delete the pod
May 6 22:54:43.527: INFO: Waiting for pod downwardapi-volume-3c3f9b0f-5b8c-463b-ac84-b4679cd08e0a to disappear
May 6 22:54:43.557: INFO: Pod downwardapi-volume-3c3f9b0f-5b8c-463b-ac84-b4679cd08e0a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 6 22:54:43.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2370" for this suite.
• [SLOW TEST:6.215 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":11,"skipped":137,"failed":0}
SSS
------------------------------
[sig-storage] Projected configMap
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 6 22:54:43.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with configMap that has name projected-configmap-test-upd-878a0a62-4b18-4bca-8ece-18f1cc2450e7
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-878a0a62-4b18-4bca-8ece-18f1cc2450e7
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 6 22:54:49.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-686" for this suite.
• [SLOW TEST:6.178 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":12,"skipped":140,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 6 22:54:49.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 6 22:54:49.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3063" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":13,"skipped":150,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 22:54:50.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0506 22:54:51.470419 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 6 22:54:51.470: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 22:54:51.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9237" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":14,"skipped":158,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 22:54:51.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 6 22:54:52.735: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 6 22:54:54.747: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724402492, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724402492, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724402493, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724402492, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 22:54:56.871: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724402492, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724402492, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724402493, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724402492, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 22:54:58.751: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", 
Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724402492, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724402492, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724402493, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724402492, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 22:55:01.821: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 22:55:01.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3497" for this suite. STEP: Destroying namespace "webhook-3497-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.649 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":15,"skipped":164,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 22:55:02.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC May 6 22:55:02.169: INFO: namespace kubectl-6047 May 6 22:55:02.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6047' May 6 22:55:02.572: INFO: stderr: "" May 6 22:55:02.572: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 6 22:55:03.576: INFO: Selector matched 1 pods for map[app:agnhost] May 6 22:55:03.576: INFO: Found 0 / 1 May 6 22:55:04.577: INFO: Selector matched 1 pods for map[app:agnhost] May 6 22:55:04.577: INFO: Found 0 / 1 May 6 22:55:05.607: INFO: Selector matched 1 pods for map[app:agnhost] May 6 22:55:05.607: INFO: Found 1 / 1 May 6 22:55:05.607: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 6 22:55:05.631: INFO: Selector matched 1 pods for map[app:agnhost] May 6 22:55:05.631: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 6 22:55:05.631: INFO: wait on agnhost-master startup in kubectl-6047 May 6 22:55:05.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-2bdtn agnhost-master --namespace=kubectl-6047' May 6 22:55:05.744: INFO: stderr: "" May 6 22:55:05.744: INFO: stdout: "Paused\n" STEP: exposing RC May 6 22:55:05.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-6047' May 6 22:55:05.955: INFO: stderr: "" May 6 22:55:05.955: INFO: stdout: "service/rm2 exposed\n" May 6 22:55:05.960: INFO: Service rm2 in namespace kubectl-6047 found. 
STEP: exposing service May 6 22:55:07.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-6047' May 6 22:55:08.138: INFO: stderr: "" May 6 22:55:08.138: INFO: stdout: "service/rm3 exposed\n" May 6 22:55:08.146: INFO: Service rm3 in namespace kubectl-6047 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 22:55:10.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6047" for this suite. • [SLOW TEST:8.036 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1188 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":16,"skipped":174,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 22:55:10.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 6 22:55:11.459: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 6 22:55:13.470: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724402511, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724402511, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724402511, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724402511, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 22:55:15.669: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724402511, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724402511, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724402511, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724402511, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 22:55:18.528: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook May 6 22:55:18.550: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 22:55:18.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8809" for this suite. STEP: Destroying namespace "webhook-8809-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.584 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":17,"skipped":187,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 22:55:18.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller May 6 22:55:18.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - 
--namespace=kubectl-9357' May 6 22:55:19.542: INFO: stderr: "" May 6 22:55:19.542: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 6 22:55:19.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9357' May 6 22:55:19.766: INFO: stderr: "" May 6 22:55:19.766: INFO: stdout: "update-demo-nautilus-79d8l update-demo-nautilus-hcmzr " May 6 22:55:19.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-79d8l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9357' May 6 22:55:19.965: INFO: stderr: "" May 6 22:55:19.965: INFO: stdout: "" May 6 22:55:19.965: INFO: update-demo-nautilus-79d8l is created but not running May 6 22:55:24.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9357' May 6 22:55:25.060: INFO: stderr: "" May 6 22:55:25.060: INFO: stdout: "update-demo-nautilus-79d8l update-demo-nautilus-hcmzr " May 6 22:55:25.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-79d8l -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9357' May 6 22:55:25.154: INFO: stderr: "" May 6 22:55:25.154: INFO: stdout: "true" May 6 22:55:25.154: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-79d8l -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9357' May 6 22:55:25.249: INFO: stderr: "" May 6 22:55:25.249: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 6 22:55:25.249: INFO: validating pod update-demo-nautilus-79d8l May 6 22:55:25.261: INFO: got data: { "image": "nautilus.jpg" } May 6 22:55:25.261: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 6 22:55:25.261: INFO: update-demo-nautilus-79d8l is verified up and running May 6 22:55:25.261: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hcmzr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9357' May 6 22:55:25.352: INFO: stderr: "" May 6 22:55:25.352: INFO: stdout: "true" May 6 22:55:25.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hcmzr -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9357' May 6 22:55:25.444: INFO: stderr: "" May 6 22:55:25.444: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 6 22:55:25.444: INFO: validating pod update-demo-nautilus-hcmzr May 6 22:55:25.449: INFO: got data: { "image": "nautilus.jpg" } May 6 22:55:25.449: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 6 22:55:25.449: INFO: update-demo-nautilus-hcmzr is verified up and running STEP: scaling down the replication controller May 6 22:55:25.451: INFO: scanned /root for discovery docs: May 6 22:55:25.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-9357' May 6 22:55:26.569: INFO: stderr: "" May 6 22:55:26.569: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 6 22:55:26.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9357' May 6 22:55:26.666: INFO: stderr: "" May 6 22:55:26.666: INFO: stdout: "update-demo-nautilus-79d8l update-demo-nautilus-hcmzr " STEP: Replicas for name=update-demo: expected=1 actual=2 May 6 22:55:31.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9357' May 6 22:55:31.781: INFO: stderr: "" May 6 22:55:31.781: INFO: stdout: "update-demo-nautilus-79d8l update-demo-nautilus-hcmzr " STEP: Replicas for name=update-demo: expected=1 actual=2 May 6 22:55:36.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9357' May 6 22:55:36.900: INFO: stderr: "" May 6 22:55:36.900: INFO: stdout: "update-demo-nautilus-79d8l update-demo-nautilus-hcmzr " STEP: Replicas for name=update-demo: expected=1 actual=2 May 6 22:55:41.900: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9357' May 6 22:55:42.010: INFO: stderr: "" May 6 22:55:42.010: INFO: stdout: "update-demo-nautilus-hcmzr " May 6 22:55:42.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hcmzr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9357' May 6 22:55:42.108: INFO: stderr: "" May 6 22:55:42.108: INFO: stdout: "true" May 6 22:55:42.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hcmzr -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9357' May 6 22:55:42.190: INFO: stderr: "" May 6 22:55:42.190: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 6 22:55:42.190: INFO: validating pod update-demo-nautilus-hcmzr May 6 22:55:42.192: INFO: got data: { "image": "nautilus.jpg" } May 6 22:55:42.192: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 6 22:55:42.192: INFO: update-demo-nautilus-hcmzr is verified up and running STEP: scaling up the replication controller May 6 22:55:42.195: INFO: scanned /root for discovery docs: May 6 22:55:42.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-9357' May 6 22:55:43.583: INFO: stderr: "" May 6 22:55:43.583: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 6 22:55:43.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9357' May 6 22:55:43.673: INFO: stderr: "" May 6 22:55:43.673: INFO: stdout: "update-demo-nautilus-d4xqj update-demo-nautilus-hcmzr " May 6 22:55:43.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d4xqj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9357' May 6 22:55:43.772: INFO: stderr: "" May 6 22:55:43.772: INFO: stdout: "" May 6 22:55:43.772: INFO: update-demo-nautilus-d4xqj is created but not running May 6 22:55:48.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9357' May 6 22:55:48.878: INFO: stderr: "" May 6 22:55:48.878: INFO: stdout: "update-demo-nautilus-d4xqj update-demo-nautilus-hcmzr " May 6 22:55:48.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d4xqj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9357' May 6 22:55:48.983: INFO: stderr: "" May 6 22:55:48.983: INFO: stdout: "true" May 6 22:55:48.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-d4xqj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9357' May 6 22:55:49.084: INFO: stderr: "" May 6 22:55:49.084: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 6 22:55:49.084: INFO: validating pod update-demo-nautilus-d4xqj May 6 22:55:49.088: INFO: got data: { "image": "nautilus.jpg" } May 6 22:55:49.089: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 6 22:55:49.089: INFO: update-demo-nautilus-d4xqj is verified up and running May 6 22:55:49.089: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hcmzr -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9357' May 6 22:55:49.218: INFO: stderr: "" May 6 22:55:49.218: INFO: stdout: "true" May 6 22:55:49.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hcmzr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9357' May 6 22:55:49.315: INFO: stderr: "" May 6 22:55:49.315: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 6 22:55:49.315: INFO: validating pod update-demo-nautilus-hcmzr May 6 22:55:49.331: INFO: got data: { "image": "nautilus.jpg" } May 6 22:55:49.331: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 6 22:55:49.331: INFO: update-demo-nautilus-hcmzr is verified up and running STEP: using delete to clean up resources May 6 22:55:49.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9357' May 6 22:55:49.455: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 6 22:55:49.455: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 6 22:55:49.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9357' May 6 22:55:49.558: INFO: stderr: "No resources found in kubectl-9357 namespace.\n" May 6 22:55:49.558: INFO: stdout: "" May 6 22:55:49.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9357 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 6 22:55:49.661: INFO: stderr: "" May 6 22:55:49.661: INFO: stdout: "update-demo-nautilus-d4xqj\nupdate-demo-nautilus-hcmzr\n" May 6 22:55:50.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9357' May 6 22:55:50.258: INFO: stderr: "No resources found in kubectl-9357 namespace.\n" May 6 22:55:50.258: INFO: stdout: "" May 6 22:55:50.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9357 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 6 22:55:50.381: INFO: stderr: "" May 6 22:55:50.381: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 22:55:50.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9357" for this suite. 
• [SLOW TEST:31.641 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":278,"completed":18,"skipped":199,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 22:55:50.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 6 22:55:58.995: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 6 22:55:59.027: INFO: Pod pod-with-prestop-http-hook still exists May 6 22:56:01.027: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 6 22:56:01.031: INFO: Pod pod-with-prestop-http-hook still exists May 6 22:56:03.027: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 6 22:56:03.031: INFO: Pod pod-with-prestop-http-hook still exists May 6 22:56:05.027: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 6 22:56:05.034: INFO: Pod pod-with-prestop-http-hook still exists May 6 22:56:07.027: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 6 22:56:07.031: INFO: Pod pod-with-prestop-http-hook still exists May 6 22:56:09.027: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 6 22:56:09.031: INFO: Pod pod-with-prestop-http-hook still exists May 6 22:56:11.027: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 6 22:56:11.031: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 22:56:11.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4992" for this suite. 
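------------------------------
In the trace above, deleting pod-with-prestop-http-hook takes several polls to complete because the kubelet first executes the pod's preStop HTTP hook (against a separate handler pod, whose record of the request is what "check prestop hook" asserts). A simplified, self-contained sketch of a preStop httpGet hook; the pod name, image, and echo endpoint are assumptions, and the GET targets the pod's own IP rather than a separate handler:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: prestop-http-demo                                # illustrative name
  spec:
    containers:
    - name: main
      image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8 # assumed image; serves HTTP via netexec
      args: ["netexec", "--http-port=8080"]
      lifecycle:
        preStop:
          httpGet:                                         # GET runs before SIGTERM is sent
            path: /echo?msg=prestop
            port: 8080
  EOF
  kubectl delete pod prestop-http-demo   # deletion waits for the hook, up to the grace period
------------------------------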
• [SLOW TEST:20.658 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":19,"skipped":223,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 22:56:11.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller May 6 22:56:11.132: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-127' May 6 22:56:11.459: INFO: stderr: "" May 6 22:56:11.459: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 6 22:56:11.460: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-127' May 6 22:56:11.614: INFO: stderr: "" May 6 22:56:11.614: INFO: stdout: "update-demo-nautilus-4bbmt update-demo-nautilus-54s9d " May 6 22:56:11.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4bbmt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-127' May 6 22:56:11.731: INFO: stderr: "" May 6 22:56:11.731: INFO: stdout: "" May 6 22:56:11.731: INFO: update-demo-nautilus-4bbmt is created but not running May 6 22:56:16.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-127' May 6 22:56:16.834: INFO: stderr: "" May 6 22:56:16.834: INFO: stdout: "update-demo-nautilus-4bbmt update-demo-nautilus-54s9d " May 6 22:56:16.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4bbmt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-127' May 6 22:56:16.929: INFO: stderr: "" May 6 22:56:16.929: INFO: stdout: "true" May 6 22:56:16.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4bbmt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-127' May 6 22:56:17.022: INFO: stderr: "" May 6 22:56:17.022: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 6 22:56:17.022: INFO: validating pod update-demo-nautilus-4bbmt May 6 22:56:17.026: INFO: got data: { "image": "nautilus.jpg" } May 6 22:56:17.026: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 6 22:56:17.026: INFO: update-demo-nautilus-4bbmt is verified up and running May 6 22:56:17.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-54s9d -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-127' May 6 22:56:17.121: INFO: stderr: "" May 6 22:56:17.121: INFO: stdout: "true" May 6 22:56:17.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-54s9d -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-127' May 6 22:56:17.221: INFO: stderr: "" May 6 22:56:17.221: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 6 22:56:17.222: INFO: validating pod update-demo-nautilus-54s9d May 6 22:56:17.225: INFO: got data: { "image": "nautilus.jpg" } May 6 22:56:17.226: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 6 22:56:17.226: INFO: update-demo-nautilus-54s9d is verified up and running STEP: using delete to clean up resources May 6 22:56:17.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-127' May 6 22:56:17.320: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 6 22:56:17.320: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 6 22:56:17.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-127' May 6 22:56:17.412: INFO: stderr: "No resources found in kubectl-127 namespace.\n" May 6 22:56:17.412: INFO: stdout: "" May 6 22:56:17.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-127 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 6 22:56:17.517: INFO: stderr: "" May 6 22:56:17.517: INFO: stdout: "update-demo-nautilus-4bbmt\nupdate-demo-nautilus-54s9d\n" May 6 22:56:18.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-127' May 6 22:56:18.616: INFO: stderr: "No resources found in kubectl-127 namespace.\n" May 6 22:56:18.616: INFO: stdout: "" May 6 22:56:18.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-127 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 6 22:56:18.739: INFO: stderr: "" May 6 22:56:18.739: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 22:56:18.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-127" for this suite. • [SLOW TEST:7.720 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":20,"skipped":236,"failed":0} S ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 22:56:18.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 6 22:56:18.839: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 6 22:56:18.872: INFO: Waiting for terminating namespaces to be deleted... 
May 6 22:56:18.874: INFO: Logging pods the kubelet thinks are on node jerma-worker before test May 6 22:56:18.880: INFO: update-demo-nautilus-54s9d from kubectl-127 started at 2020-05-06 22:56:11 +0000 UTC (1 container status recorded) May 6 22:56:18.880: INFO: Container update-demo ready: true, restart count 0 May 6 22:56:18.880: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 6 22:56:18.880: INFO: Container kindnet-cni ready: true, restart count 0 May 6 22:56:18.880: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 6 22:56:18.880: INFO: Container kube-proxy ready: true, restart count 0 May 6 22:56:18.880: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test May 6 22:56:18.887: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) May 6 22:56:18.887: INFO: Container kube-bench ready: false, restart count 0 May 6 22:56:18.887: INFO: pod-handle-http-request from container-lifecycle-hook-4992 started at 2020-05-06 22:55:50 +0000 UTC (1 container status recorded) May 6 22:56:18.887: INFO: Container pod-handle-http-request ready: false, restart count 0 May 6 22:56:18.887: INFO: update-demo-nautilus-4bbmt from kubectl-127 started at 2020-05-06 22:56:11 +0000 UTC (1 container status recorded) May 6 22:56:18.887: INFO: Container update-demo ready: true, restart count 0 May 6 22:56:18.887: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 6 22:56:18.887: INFO: Container kindnet-cni ready: true, restart count 0 May 6 22:56:18.887: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 6 22:56:18.887: INFO: Container kube-proxy ready: true, restart count 0 May 6 22:56:18.887: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) May 6 22:56:18.887: INFO: Container kube-hunter ready: false, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-d75ddada-eade-428a-a8fc-606bc6fa651c 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-d75ddada-eade-428a-a8fc-606bc6fa651c off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-d75ddada-eade-428a-a8fc-606bc6fa651c [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 22:56:27.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2647" for this suite.
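------------------------------
The flow above (label a node, schedule a pod whose nodeSelector requires that label, then unlabel) maps to a few kubectl steps. A sketch with an illustrative label key; the node name jerma-worker and the value 42 are taken from the trace:

  kubectl label node jerma-worker kubernetes.io/e2e-example=42
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: with-labels                  # illustrative name
  spec:
    nodeSelector:
      kubernetes.io/e2e-example: "42"  # pod is only schedulable on the labelled node
    containers:
    - name: with-labels
      image: k8s.gcr.io/pause:3.1      # illustrative image
  EOF
  kubectl label node jerma-worker kubernetes.io/e2e-example-   # trailing '-' removes the label
------------------------------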
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:8.490 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":21,"skipped":237,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 22:56:27.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7525.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-7525.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7525.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7525.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-7525.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7525.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 6 22:56:33.488: INFO: DNS probes using dns-7525/dns-test-205b5e01-eedf-4ad8-a893-21453c53a7b1 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 22:56:33.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7525" for this suite. 
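------------------------------
The wheezy/jessie probe commands above are easier to read once unescaped: the doubled $$ appears only because the script is embedded in a pod command string. Run inside a probe pod, each iteration is essentially the following (service and namespace names from the trace; the pod IP shown in the comment is an example, not from this run):

  for i in `seq 1 600`; do
    # the hostname resolves via the /etc/hosts entries injected for the headless service
    test -n "$(getent hosts dns-querier-1.dns-test-service.dns-7525.svc.cluster.local)" && echo OK
    test -n "$(getent hosts dns-querier-1)" && echo OK
    # pod A record derived from the pod IP, e.g. 10.244.1.2 -> 10-244-1-2.dns-7525.pod.cluster.local
    podARec=$(hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-7525.pod.cluster.local"}')
    dig +notcp +noall +answer +search "$podARec" A   # UDP query
    dig +tcp +noall +answer +search "$podARec" A     # TCP query
    sleep 1
  done
------------------------------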
• [SLOW TEST:6.291 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":22,"skipped":249,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 22:56:33.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-projected-sbr4 STEP: Creating a pod to test atomic-volume-subpath May 6 22:56:33.911: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-sbr4" in namespace "subpath-1426" to be "success or failure" May 6 22:56:33.947: INFO: Pod "pod-subpath-test-projected-sbr4": Phase="Pending", Reason="", readiness=false. Elapsed: 36.572113ms May 6 22:56:36.104: INFO: Pod "pod-subpath-test-projected-sbr4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.193616066s May 6 22:56:38.109: INFO: Pod "pod-subpath-test-projected-sbr4": Phase="Running", Reason="", readiness=true. Elapsed: 4.198697009s May 6 22:56:40.114: INFO: Pod "pod-subpath-test-projected-sbr4": Phase="Running", Reason="", readiness=true. Elapsed: 6.203191374s May 6 22:56:42.118: INFO: Pod "pod-subpath-test-projected-sbr4": Phase="Running", Reason="", readiness=true. Elapsed: 8.207722727s May 6 22:56:44.123: INFO: Pod "pod-subpath-test-projected-sbr4": Phase="Running", Reason="", readiness=true. Elapsed: 10.212251977s May 6 22:56:46.128: INFO: Pod "pod-subpath-test-projected-sbr4": Phase="Running", Reason="", readiness=true. Elapsed: 12.21689636s May 6 22:56:48.132: INFO: Pod "pod-subpath-test-projected-sbr4": Phase="Running", Reason="", readiness=true. Elapsed: 14.221165778s May 6 22:56:50.136: INFO: Pod "pod-subpath-test-projected-sbr4": Phase="Running", Reason="", readiness=true. Elapsed: 16.225369472s May 6 22:56:52.140: INFO: Pod "pod-subpath-test-projected-sbr4": Phase="Running", Reason="", readiness=true. Elapsed: 18.229764339s May 6 22:56:54.146: INFO: Pod "pod-subpath-test-projected-sbr4": Phase="Running", Reason="", readiness=true. Elapsed: 20.235205392s May 6 22:56:56.150: INFO: Pod "pod-subpath-test-projected-sbr4": Phase="Running", Reason="", readiness=true. Elapsed: 22.239667325s May 6 22:56:58.155: INFO: Pod "pod-subpath-test-projected-sbr4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.244119443s STEP: Saw pod success May 6 22:56:58.155: INFO: Pod "pod-subpath-test-projected-sbr4" satisfied condition "success or failure" May 6 22:56:58.158: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-projected-sbr4 container test-container-subpath-projected-sbr4: STEP: delete the pod May 6 22:56:58.179: INFO: Waiting for pod pod-subpath-test-projected-sbr4 to disappear May 6 22:56:58.183: INFO: Pod pod-subpath-test-projected-sbr4 no longer exists STEP: Deleting pod pod-subpath-test-projected-sbr4 May 6 22:56:58.183: INFO: Deleting pod "pod-subpath-test-projected-sbr4" in namespace "subpath-1426" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 22:56:58.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1426" for this suite. • [SLOW TEST:24.643 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":23,"skipped":298,"failed":0} SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 22:56:58.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-7979 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating statefulset ss in namespace statefulset-7979 May 6 22:56:58.555: INFO: Found 0 stateful pods, waiting for 1 May 6 22:57:08.560: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 6 22:57:08.586: INFO: Deleting all statefulset in ns statefulset-7979 May 6 22:57:08.611: INFO: Scaling statefulset ss to 0 May 6 22:57:38.722: INFO: Waiting for statefulset status.replicas updated to 0 May 6 22:57:38.724: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 22:57:38.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7979" for this suite. • [SLOW TEST:40.551 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":24,"skipped":300,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 22:57:38.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-97088e9e-97dd-43c6-8242-49eacb8e011b STEP: Creating a pod to test consume configMaps May 6 22:57:38.866: INFO: Waiting up to 5m0s for pod "pod-configmaps-30701ad7-3774-4184-803e-dd0c5ebe6aa2" in namespace "configmap-8140" to be "success or failure" May 6 22:57:38.920: INFO: Pod "pod-configmaps-30701ad7-3774-4184-803e-dd0c5ebe6aa2": Phase="Pending", Reason="", readiness=false. Elapsed: 53.555123ms May 6 22:57:40.924: INFO: Pod "pod-configmaps-30701ad7-3774-4184-803e-dd0c5ebe6aa2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057758983s May 6 22:57:42.928: INFO: Pod "pod-configmaps-30701ad7-3774-4184-803e-dd0c5ebe6aa2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061801284s STEP: Saw pod success May 6 22:57:42.928: INFO: Pod "pod-configmaps-30701ad7-3774-4184-803e-dd0c5ebe6aa2" satisfied condition "success or failure" May 6 22:57:42.931: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-30701ad7-3774-4184-803e-dd0c5ebe6aa2 container configmap-volume-test: STEP: delete the pod May 6 22:57:42.969: INFO: Waiting for pod pod-configmaps-30701ad7-3774-4184-803e-dd0c5ebe6aa2 to disappear May 6 22:57:42.976: INFO: Pod pod-configmaps-30701ad7-3774-4184-803e-dd0c5ebe6aa2 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 22:57:42.976: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8140" for this suite. 
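------------------------------
The pod in the test above mounts the same ConfigMap through two separate volumes and reads the key back through both paths. A minimal sketch; the ConfigMap name, key, and image are illustrative:

  kubectl create configmap demo-config --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: configmap-two-volumes       # illustrative name
  spec:
    restartPolicy: Never
    volumes:
    - name: vol-1
      configMap: {name: demo-config}
    - name: vol-2
      configMap: {name: demo-config}  # same ConfigMap, second volume
    containers:
    - name: configmap-volume-test
      image: busybox
      command: ["sh", "-c", "cat /etc/vol-1/data-1 /etc/vol-2/data-1"]
      volumeMounts:
      - {name: vol-1, mountPath: /etc/vol-1}
      - {name: vol-2, mountPath: /etc/vol-2}
  EOF
------------------------------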
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":25,"skipped":334,"failed":0} SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 22:57:42.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-6a367247-8862-49db-984e-29676660e033 STEP: Creating a pod to test consume secrets May 6 22:57:43.095: INFO: Waiting up to 5m0s for pod "pod-secrets-ab79e935-a25a-4a61-80e9-65fa89748c71" in namespace "secrets-7444" to be "success or failure" May 6 22:57:43.116: INFO: Pod "pod-secrets-ab79e935-a25a-4a61-80e9-65fa89748c71": Phase="Pending", Reason="", readiness=false. Elapsed: 21.68318ms May 6 22:57:45.121: INFO: Pod "pod-secrets-ab79e935-a25a-4a61-80e9-65fa89748c71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026195987s May 6 22:57:47.124: INFO: Pod "pod-secrets-ab79e935-a25a-4a61-80e9-65fa89748c71": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02931929s STEP: Saw pod success May 6 22:57:47.124: INFO: Pod "pod-secrets-ab79e935-a25a-4a61-80e9-65fa89748c71" satisfied condition "success or failure" May 6 22:57:47.126: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-ab79e935-a25a-4a61-80e9-65fa89748c71 container secret-volume-test: STEP: delete the pod May 6 22:57:47.146: INFO: Waiting for pod pod-secrets-ab79e935-a25a-4a61-80e9-65fa89748c71 to disappear May 6 22:57:47.150: INFO: Pod pod-secrets-ab79e935-a25a-4a61-80e9-65fa89748c71 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 22:57:47.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7444" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":26,"skipped":336,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 22:57:47.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 6 22:57:55.286: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 6 22:57:55.289: INFO: Pod pod-with-prestop-exec-hook still exists May 6 22:57:57.289: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 6 22:57:57.294: INFO: Pod pod-with-prestop-exec-hook still exists May 6 22:57:59.289: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 6 22:57:59.293: INFO: Pod pod-with-prestop-exec-hook still exists May 6 22:58:01.289: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 6 22:58:01.293: INFO: Pod pod-with-prestop-exec-hook still exists May 6 22:58:03.289: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 6 22:58:03.300: INFO: Pod pod-with-prestop-exec-hook still exists May 6 22:58:05.289: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 6 22:58:05.292: INFO: Pod pod-with-prestop-exec-hook still exists May 6 22:58:07.289: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 6 22:58:07.293: INFO: Pod pod-with-prestop-exec-hook still exists May 6 22:58:09.289: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 6 22:58:09.293: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 22:58:09.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-104" for this suite. 
• [SLOW TEST:22.175 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":27,"skipped":345,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 22:58:09.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 6 22:58:09.384: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 22:58:17.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9338" for this suite. 
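------------------------------
The InitContainer test above verifies ordering: on a RestartAlways pod, each init container must run to completion, one at a time and in spec order, before any regular container starts. A minimal Go sketch of such a pod; image and command choices are illustrative (the real test uses its own images and randomized names):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			// init1 must exit 0 before init2 starts; both must finish
			// before the main container is created.
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox", Command: []string{"true"}},
				{Name: "init2", Image: "busybox", Command: []string{"true"}},
			},
			Containers: []corev1.Container{{
				Name:  "run1",
				Image: "busybox",
				Command: []string{"sh", "-c", "sleep 3600"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

Init containers must run to completion and may not declare lifecycle or probe fields, which is what lets the kubelet treat each one as a gate for the next.
------------------------------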
• [SLOW TEST:8.235 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":28,"skipped":363,"failed":0} [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 22:58:17.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-fab1adb7-8608-4e24-ae65-61063c9e1724 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-fab1adb7-8608-4e24-ae65-61063c9e1724 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 22:59:44.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-807" for this suite. • [SLOW TEST:87.398 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":29,"skipped":363,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 22:59:44.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium May 6 22:59:45.380: INFO: Waiting up to 5m0s for pod "pod-01ba3d53-04ed-4aa7-aa28-9cc3b37d1776" in namespace "emptydir-8079" to be "success or failure" May 6 22:59:46.023: INFO: Pod "pod-01ba3d53-04ed-4aa7-aa28-9cc3b37d1776": Phase="Pending", Reason="", readiness=false. 
Elapsed: 642.373524ms May 6 22:59:48.026: INFO: Pod "pod-01ba3d53-04ed-4aa7-aa28-9cc3b37d1776": Phase="Pending", Reason="", readiness=false. Elapsed: 2.645359085s May 6 22:59:50.167: INFO: Pod "pod-01ba3d53-04ed-4aa7-aa28-9cc3b37d1776": Phase="Pending", Reason="", readiness=false. Elapsed: 4.786458455s May 6 22:59:52.526: INFO: Pod "pod-01ba3d53-04ed-4aa7-aa28-9cc3b37d1776": Phase="Pending", Reason="", readiness=false. Elapsed: 7.145679435s May 6 22:59:54.599: INFO: Pod "pod-01ba3d53-04ed-4aa7-aa28-9cc3b37d1776": Phase="Pending", Reason="", readiness=false. Elapsed: 9.218804168s May 6 22:59:56.603: INFO: Pod "pod-01ba3d53-04ed-4aa7-aa28-9cc3b37d1776": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.222018162s STEP: Saw pod success May 6 22:59:56.603: INFO: Pod "pod-01ba3d53-04ed-4aa7-aa28-9cc3b37d1776" satisfied condition "success or failure" May 6 22:59:56.605: INFO: Trying to get logs from node jerma-worker pod pod-01ba3d53-04ed-4aa7-aa28-9cc3b37d1776 container test-container: STEP: delete the pod May 6 22:59:56.946: INFO: Waiting for pod pod-01ba3d53-04ed-4aa7-aa28-9cc3b37d1776 to disappear May 6 22:59:57.101: INFO: Pod pod-01ba3d53-04ed-4aa7-aa28-9cc3b37d1776 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 22:59:57.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8079" for this suite. • [SLOW TEST:12.143 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":30,"skipped":380,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 22:59:57.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-b63ca6ce-73bf-45bc-b877-fd227c3d04a2 in namespace container-probe-7016 May 6 23:00:09.779: INFO: Started pod liveness-b63ca6ce-73bf-45bc-b877-fd227c3d04a2 in namespace container-probe-7016 STEP: checking the pod's current state and verifying that restartCount is present May 6 23:00:09.781: INFO: Initial restart count of pod liveness-b63ca6ce-73bf-45bc-b877-fd227c3d04a2 is 0 May 6 23:00:31.871: INFO: Restart count of pod container-probe-7016/liveness-b63ca6ce-73bf-45bc-b877-fd227c3d04a2 is now 1 (22.090000164s elapsed) STEP: 
deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:00:31.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7016" for this suite. • [SLOW TEST:35.181 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":31,"skipped":391,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:00:32.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1626 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 6 23:00:32.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-3855' May 6 23:00:32.643: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 6 23:00:32.643: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1631 May 6 23:00:37.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-3855' May 6 23:00:37.244: INFO: stderr: "" May 6 23:00:37.244: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:00:37.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3855" for this suite. 
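------------------------------
Note the stderr line in the kubectl test above: kubectl run --generator=deployment/apps.v1 was already deprecated at this version and was later removed; kubectl create deployment (or an explicit manifest) is the replacement. My reading of what the generator expanded to is, roughly, a one-replica apps/v1 Deployment keyed on a run=<name> label; treat the label key and replica count in this Go sketch as assumptions about the generator's defaults (the name and image are the ones from the log):

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	labels := map[string]string{"run": "e2e-test-httpd-deployment"} // assumed generator default
	dep := appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-httpd-deployment", Labels: labels},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "e2e-test-httpd-deployment",
						Image: "docker.io/library/httpd:2.4.38-alpine", // image from the log
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(dep, "", "  ")
	fmt.Println(string(out))
}

The test's two verification steps map onto this object: the Deployment itself must exist, and its selector must match a running pod created through the ReplicaSet it owns.
------------------------------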
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":278,"completed":32,"skipped":405,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:00:37.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-17b24d77-d36d-403c-b22f-956c2b20876a STEP: Creating configMap with name cm-test-opt-upd-f71f6e18-5f86-4482-874d-9c32c99682e1 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-17b24d77-d36d-403c-b22f-956c2b20876a STEP: Updating configmap cm-test-opt-upd-f71f6e18-5f86-4482-874d-9c32c99682e1 STEP: Creating configMap with name cm-test-opt-create-a906de50-e5d6-4559-bd2c-71cc96678f59 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:02:13.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6454" for this suite. • [SLOW TEST:96.133 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":33,"skipped":437,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:02:13.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 6 23:02:13.635: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties May 6 23:02:16.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-74 create -f -' May 6 23:02:24.026: INFO: stderr: "" May 6 23:02:24.026: INFO: stdout: 
"e2e-test-crd-publish-openapi-2771-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 6 23:02:24.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-74 delete e2e-test-crd-publish-openapi-2771-crds test-foo' May 6 23:02:24.220: INFO: stderr: "" May 6 23:02:24.220: INFO: stdout: "e2e-test-crd-publish-openapi-2771-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" May 6 23:02:24.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-74 apply -f -' May 6 23:02:24.490: INFO: stderr: "" May 6 23:02:24.490: INFO: stdout: "e2e-test-crd-publish-openapi-2771-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 6 23:02:24.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-74 delete e2e-test-crd-publish-openapi-2771-crds test-foo' May 6 23:02:24.615: INFO: stderr: "" May 6 23:02:24.615: INFO: stdout: "e2e-test-crd-publish-openapi-2771-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema May 6 23:02:24.615: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-74 create -f -' May 6 23:02:24.854: INFO: rc: 1 May 6 23:02:24.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-74 apply -f -' May 6 23:02:25.118: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties May 6 23:02:25.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-74 create -f -' May 6 23:02:25.355: INFO: rc: 1 May 6 23:02:25.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-74 apply -f -' May 6 23:02:25.603: INFO: rc: 1 STEP: kubectl explain works to explain CR properties May 6 23:02:25.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2771-crds' May 6 23:02:25.841: INFO: stderr: "" May 6 23:02:25.841: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2771-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively May 6 23:02:25.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2771-crds.metadata' May 6 23:02:26.119: INFO: stderr: "" May 6 23:02:26.119: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2771-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. 
Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" May 6 23:02:26.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2771-crds.spec' May 6 23:02:26.375: INFO: stderr: "" May 6 23:02:26.375: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2771-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" May 6 23:02:26.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2771-crds.spec.bars' May 6 23:02:26.679: INFO: stderr: "" May 6 23:02:26.679: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2771-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist May 6 23:02:26.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2771-crds.spec.bars2' May 6 23:02:26.937: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:02:28.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-74" for this suite. 
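------------------------------
The kubectl explain output above pins down the shape of the schema the CRD test registers: spec.bars is an array of objects with a required name plus optional age and bazs. A sketch of an apiextensions.k8s.io/v1 CustomResourceDefinition that would publish similar OpenAPI; the group and names (example.com / foos / Foo) are stand-ins, since the real test generates randomized names like e2e-test-crd-publish-openapi-2771-crds:

package main

import (
	"encoding/json"
	"fmt"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	barSchema := apiextv1.JSONSchemaProps{
		Type:     "object",
		Required: []string{"name"}, // kubectl explain shows name as -required-
		Properties: map[string]apiextv1.JSONSchemaProps{
			"name": {Type: "string", Description: "Name of Bar."},
			"age":  {Type: "string", Description: "Age of Bar."},
			"bazs": {Type: "array", Description: "List of Bazs.",
				Items: &apiextv1.JSONSchemaPropsOrArray{Schema: &apiextv1.JSONSchemaProps{Type: "string"}}},
		},
	}
	crd := apiextv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"}, // must be <plural>.<group>
		Spec: apiextv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextv1.NamespaceScoped,
			Names: apiextv1.CustomResourceDefinitionNames{Plural: "foos", Singular: "foo", Kind: "Foo"},
			Versions: []apiextv1.CustomResourceDefinitionVersion{{
				Name: "v1", Served: true, Storage: true,
				Schema: &apiextv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextv1.JSONSchemaProps{
						Type:        "object",
						Description: "Foo CRD for Testing",
						Properties: map[string]apiextv1.JSONSchemaProps{
							"spec": {Type: "object", Description: "Specification of Foo",
								Properties: map[string]apiextv1.JSONSchemaProps{
									"bars": {Type: "array", Description: "List of Bars and their specs.",
										Items: &apiextv1.JSONSchemaPropsOrArray{Schema: &barSchema}},
								}},
							"status": {Type: "object", Description: "Status of Foo"},
						},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(crd, "", "  ")
	fmt.Println(string(out))
}

Because v1 CRD schemas are structural and unknown fields are not preserved by default, the rc: 1 results above for requests with unknown or missing required properties are exactly the client-side validation the test expects kubectl to perform against the published OpenAPI.
------------------------------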
• [SLOW TEST:15.446 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":34,"skipped":441,"failed":0} SSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:02:28.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:02:33.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1123" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":35,"skipped":445,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:02:33.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs May 6 23:02:33.704: INFO: Waiting up to 5m0s for pod "pod-78bf3e4e-5d0a-4e86-86b2-a50de55995b8" in namespace "emptydir-9709" to be "success or failure" May 6 23:02:33.714: INFO: Pod "pod-78bf3e4e-5d0a-4e86-86b2-a50de55995b8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.40279ms May 6 23:02:35.815: INFO: Pod "pod-78bf3e4e-5d0a-4e86-86b2-a50de55995b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111586567s May 6 23:02:37.820: INFO: Pod "pod-78bf3e4e-5d0a-4e86-86b2-a50de55995b8": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.116082569s May 6 23:02:39.823: INFO: Pod "pod-78bf3e4e-5d0a-4e86-86b2-a50de55995b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.118929251s STEP: Saw pod success May 6 23:02:39.823: INFO: Pod "pod-78bf3e4e-5d0a-4e86-86b2-a50de55995b8" satisfied condition "success or failure" May 6 23:02:39.824: INFO: Trying to get logs from node jerma-worker2 pod pod-78bf3e4e-5d0a-4e86-86b2-a50de55995b8 container test-container: STEP: delete the pod May 6 23:02:39.917: INFO: Waiting for pod pod-78bf3e4e-5d0a-4e86-86b2-a50de55995b8 to disappear May 6 23:02:39.922: INFO: Pod pod-78bf3e4e-5d0a-4e86-86b2-a50de55995b8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:02:39.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9709" for this suite. • [SLOW TEST:6.344 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":36,"skipped":459,"failed":0} SSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:02:39.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 6 23:02:46.592: INFO: Successfully updated pod "pod-update-activedeadlineseconds-3b4ec5a2-d022-46e7-8261-6d528c0c792a" May 6 23:02:46.593: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-3b4ec5a2-d022-46e7-8261-6d528c0c792a" in namespace "pods-8375" to be "terminated due to deadline exceeded" May 6 23:02:46.599: INFO: Pod "pod-update-activedeadlineseconds-3b4ec5a2-d022-46e7-8261-6d528c0c792a": Phase="Running", Reason="", readiness=true. Elapsed: 6.870537ms May 6 23:02:48.604: INFO: Pod "pod-update-activedeadlineseconds-3b4ec5a2-d022-46e7-8261-6d528c0c792a": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.011176097s May 6 23:02:48.604: INFO: Pod "pod-update-activedeadlineseconds-3b4ec5a2-d022-46e7-8261-6d528c0c792a" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:02:48.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8375" for this suite. 
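------------------------------
activeDeadlineSeconds is one of the few pod-spec fields that may be mutated after creation (it can be set or shortened but, as I understand the API validation, not removed or extended), which is what the update step in this test relies on: once the deadline, measured from pod start, expires, the kubelet fails the pod with Reason=DeadlineExceeded, matching the Phase=Running to Phase=Failed transition logged above. A sketch of the field on a pod spec; the name, image, and 5-second value are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Seconds the pod may stay active (relative to StartTime) before the
	// kubelet terminates it and marks it Failed/DeadlineExceeded.
	ads := int64(5)
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-update-activedeadlineseconds-example"},
		Spec: corev1.PodSpec{
			ActiveDeadlineSeconds: &ads,
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox",
				Command: []string{"sleep", "3600"}, // outlives the deadline on purpose
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

The test's wait condition "terminated due to deadline exceeded" is satisfied when the pod reaches Phase=Failed with Reason=DeadlineExceeded, as seen about 2s after the update in the log.
------------------------------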
• [SLOW TEST:8.677 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":37,"skipped":464,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:02:48.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:03:00.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3049" for this suite. • [SLOW TEST:11.410 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":278,"completed":38,"skipped":473,"failed":0} SSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:03:00.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1517.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1517.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1517.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-1517.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 6 23:03:08.290: INFO: DNS probes using dns-test-6a9a5fbf-d287-46bd-96dd-e604698f6e6c succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1517.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1517.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1517.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-1517.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 6 23:03:16.422: INFO: File wheezy_udp@dns-test-service-3.dns-1517.svc.cluster.local from pod dns-1517/dns-test-65f98024-a920-49a1-8fe1-0dcc0e3ad212 contains 'foo.example.com. ' instead of 'bar.example.com.' May 6 23:03:16.426: INFO: File jessie_udp@dns-test-service-3.dns-1517.svc.cluster.local from pod dns-1517/dns-test-65f98024-a920-49a1-8fe1-0dcc0e3ad212 contains 'foo.example.com. ' instead of 'bar.example.com.' May 6 23:03:16.426: INFO: Lookups using dns-1517/dns-test-65f98024-a920-49a1-8fe1-0dcc0e3ad212 failed for: [wheezy_udp@dns-test-service-3.dns-1517.svc.cluster.local jessie_udp@dns-test-service-3.dns-1517.svc.cluster.local] May 6 23:03:21.469: INFO: File wheezy_udp@dns-test-service-3.dns-1517.svc.cluster.local from pod dns-1517/dns-test-65f98024-a920-49a1-8fe1-0dcc0e3ad212 contains 'foo.example.com. ' instead of 'bar.example.com.' May 6 23:03:21.472: INFO: File jessie_udp@dns-test-service-3.dns-1517.svc.cluster.local from pod dns-1517/dns-test-65f98024-a920-49a1-8fe1-0dcc0e3ad212 contains 'foo.example.com. ' instead of 'bar.example.com.' 
May 6 23:03:21.472: INFO: Lookups using dns-1517/dns-test-65f98024-a920-49a1-8fe1-0dcc0e3ad212 failed for: [wheezy_udp@dns-test-service-3.dns-1517.svc.cluster.local jessie_udp@dns-test-service-3.dns-1517.svc.cluster.local] May 6 23:03:26.430: INFO: File wheezy_udp@dns-test-service-3.dns-1517.svc.cluster.local from pod dns-1517/dns-test-65f98024-a920-49a1-8fe1-0dcc0e3ad212 contains 'foo.example.com. ' instead of 'bar.example.com.' May 6 23:03:26.434: INFO: File jessie_udp@dns-test-service-3.dns-1517.svc.cluster.local from pod dns-1517/dns-test-65f98024-a920-49a1-8fe1-0dcc0e3ad212 contains 'foo.example.com. ' instead of 'bar.example.com.' May 6 23:03:26.434: INFO: Lookups using dns-1517/dns-test-65f98024-a920-49a1-8fe1-0dcc0e3ad212 failed for: [wheezy_udp@dns-test-service-3.dns-1517.svc.cluster.local jessie_udp@dns-test-service-3.dns-1517.svc.cluster.local] May 6 23:03:31.430: INFO: File wheezy_udp@dns-test-service-3.dns-1517.svc.cluster.local from pod dns-1517/dns-test-65f98024-a920-49a1-8fe1-0dcc0e3ad212 contains 'foo.example.com. ' instead of 'bar.example.com.' May 6 23:03:31.433: INFO: File jessie_udp@dns-test-service-3.dns-1517.svc.cluster.local from pod dns-1517/dns-test-65f98024-a920-49a1-8fe1-0dcc0e3ad212 contains 'foo.example.com. ' instead of 'bar.example.com.' May 6 23:03:31.433: INFO: Lookups using dns-1517/dns-test-65f98024-a920-49a1-8fe1-0dcc0e3ad212 failed for: [wheezy_udp@dns-test-service-3.dns-1517.svc.cluster.local jessie_udp@dns-test-service-3.dns-1517.svc.cluster.local] May 6 23:03:36.433: INFO: File jessie_udp@dns-test-service-3.dns-1517.svc.cluster.local from pod dns-1517/dns-test-65f98024-a920-49a1-8fe1-0dcc0e3ad212 contains 'foo.example.com. ' instead of 'bar.example.com.' May 6 23:03:36.433: INFO: Lookups using dns-1517/dns-test-65f98024-a920-49a1-8fe1-0dcc0e3ad212 failed for: [jessie_udp@dns-test-service-3.dns-1517.svc.cluster.local] May 6 23:03:41.435: INFO: DNS probes using dns-test-65f98024-a920-49a1-8fe1-0dcc0e3ad212 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1517.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-1517.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1517.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-1517.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 6 23:03:51.889: INFO: DNS probes using dns-test-ff96e0a8-58ef-4714-a672-a5875e04fd65 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:03:51.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1517" for this suite. 
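------------------------------
An ExternalName service allocates no cluster IP and proxies no traffic; the in-cluster DNS simply answers dns-test-service-3.dns-1517.svc.cluster.local with a CNAME to spec.externalName. That is why the probe pods above run dig ... CNAME in a loop, and why the change from foo.example.com to bar.example.com takes several 5s retry rounds to show up: the probes keep reading the cached foo.example.com answer until DNS converges. The object under test, roughly, in Go (names taken from the log; the sketch omits fields the test may also set):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service-3", Namespace: "dns-1517"},
		Spec: corev1.ServiceSpec{
			Type:         corev1.ServiceTypeExternalName,
			ExternalName: "foo.example.com", // later flipped to bar.example.com by the test
		},
	}
	out, _ := json.MarshalIndent(svc, "", "  ")
	fmt.Println(string(out))
}

The final phase of the test flips Type to ClusterIP, after which the same service name resolves via an A record to the allocated cluster IP instead of a CNAME, matching the switch from CNAME to A queries in the third round of probe commands.
------------------------------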
• [SLOW TEST:52.000 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":39,"skipped":480,"failed":0} [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:03:52.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-999cc99f-ac1e-489d-a13d-7c4b636c67f0 STEP: Creating a pod to test consume secrets May 6 23:03:52.477: INFO: Waiting up to 5m0s for pod "pod-secrets-7978e083-8e36-4095-8233-cd6e820a2bf6" in namespace "secrets-2092" to be "success or failure" May 6 23:03:52.518: INFO: Pod "pod-secrets-7978e083-8e36-4095-8233-cd6e820a2bf6": Phase="Pending", Reason="", readiness=false. Elapsed: 40.970647ms May 6 23:03:54.521: INFO: Pod "pod-secrets-7978e083-8e36-4095-8233-cd6e820a2bf6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044626172s May 6 23:03:56.525: INFO: Pod "pod-secrets-7978e083-8e36-4095-8233-cd6e820a2bf6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048480221s STEP: Saw pod success May 6 23:03:56.525: INFO: Pod "pod-secrets-7978e083-8e36-4095-8233-cd6e820a2bf6" satisfied condition "success or failure" May 6 23:03:56.528: INFO: Trying to get logs from node jerma-worker pod pod-secrets-7978e083-8e36-4095-8233-cd6e820a2bf6 container secret-volume-test: STEP: delete the pod May 6 23:03:56.613: INFO: Waiting for pod pod-secrets-7978e083-8e36-4095-8233-cd6e820a2bf6 to disappear May 6 23:03:56.637: INFO: Pod pod-secrets-7978e083-8e36-4095-8233-cd6e820a2bf6 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:03:56.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2092" for this suite. 
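------------------------------
The multi-volume Secrets variant above mounts the same secret through two separate volumes at different paths in one pod, checking that both projections stay consistent. A sketch of the pattern, with illustrative names (the run's actual pod and secret names appear in the log above):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Both volumes reference the same secret; the kubelet projects it twice.
	secretVol := func(name string) corev1.Volume {
		return corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				Secret: &corev1.SecretVolumeSource{SecretName: "secret-test-example"},
			},
		}
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-multivol-example"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{secretVol("secret-volume-1"), secretVol("secret-volume-2")},
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox", // illustrative; the suite uses its own test image
				Command: []string{"sh", "-c", "cat /etc/secret-volume-1/* /etc/secret-volume-2/*"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
					{Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
				},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

As with the defaultMode test earlier, success is the pod reaching Phase=Succeeded with both mounts exposing identical secret content.
------------------------------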
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":40,"skipped":480,"failed":0} SSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:03:56.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 6 23:03:56.680: INFO: Creating deployment "webserver-deployment" May 6 23:03:56.698: INFO: Waiting for observed generation 1 May 6 23:03:58.763: INFO: Waiting for all required pods to come up May 6 23:03:58.807: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running May 6 23:04:10.828: INFO: Waiting for deployment "webserver-deployment" to complete May 6 23:04:10.834: INFO: Updating deployment "webserver-deployment" with a non-existent image May 6 23:04:10.840: INFO: Updating deployment webserver-deployment May 6 23:04:10.840: INFO: Waiting for observed generation 2 May 6 23:04:13.110: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 May 6 23:04:13.117: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 May 6 23:04:13.119: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 6 23:04:13.124: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 May 6 23:04:13.124: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 May 6 23:04:13.126: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 6 23:04:13.129: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas May 6 23:04:13.129: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 May 6 23:04:13.133: INFO: Updating deployment webserver-deployment May 6 23:04:13.133: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas May 6 23:04:13.271: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 May 6 23:04:13.285: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 6 23:04:13.543: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-7592 /apis/apps/v1/namespaces/deployment-7592/deployments/webserver-deployment c7213574-e558-42f2-b921-6fe4ab181cfa 14020634 3 2020-05-06 23:03:56 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: 
httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002bf6aa8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-05-06 23:04:11 +0000 UTC,LastTransitionTime:2020-05-06 23:03:56 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-06 23:04:13 +0000 UTC,LastTransitionTime:2020-05-06 23:04:13 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} May 6 23:04:13.687: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-7592 /apis/apps/v1/namespaces/deployment-7592/replicasets/webserver-deployment-c7997dcc8 20387fae-242b-4133-8327-3969d10ddaa4 14020675 3 2020-05-06 23:04:10 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment c7213574-e558-42f2-b921-6fe4ab181cfa 0xc002bf7027 0xc002bf7028}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002bf7098 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 6 23:04:13.687: INFO: All old ReplicaSets of Deployment "webserver-deployment": May 6 23:04:13.687: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-7592 
/apis/apps/v1/namespaces/deployment-7592/replicasets/webserver-deployment-595b5b9587 1869a0b7-330d-413e-94e5-16bed94c07e5 14020661 3 2020-05-06 23:03:56 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment c7213574-e558-42f2-b921-6fe4ab181cfa 0xc002bf6f67 0xc002bf6f68}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002bf6fc8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} May 6 23:04:13.815: INFO: Pod "webserver-deployment-595b5b9587-5f8xm" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-5f8xm webserver-deployment-595b5b9587- deployment-7592 /api/v1/namespaces/deployment-7592/pods/webserver-deployment-595b5b9587-5f8xm c0226275-5b9f-4c0d-a144-785d81a9a844 14020524 0 2020-05-06 23:03:56 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1869a0b7-330d-413e-94e5-16bed94c07e5 0xc002bf76d7 0xc002bf76d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qlpx2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qlpx2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qlpx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:03:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:03:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.49,StartTime:2020-05-06 23:03:57 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-06 23:04:07 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://1662090dd95f7a8e165dd3562b3423290816b3ce47e330e03f72fbff2cfc9f91,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.49,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 23:04:13.816: INFO: Pod "webserver-deployment-595b5b9587-6rmhd" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-6rmhd webserver-deployment-595b5b9587- deployment-7592 /api/v1/namespaces/deployment-7592/pods/webserver-deployment-595b5b9587-6rmhd 05076d94-e76b-413e-9a07-2a8e39e8ecc9 14020674 0 2020-05-06 23:04:13 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1869a0b7-330d-413e-94e5-16bed94c07e5 0xc002bf7867 0xc002bf7868}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qlpx2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qlpx2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qlpx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value
:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-06 23:04:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 23:04:13.816: INFO: Pod "webserver-deployment-595b5b9587-6wwbl" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-6wwbl webserver-deployment-595b5b9587- deployment-7592 /api/v1/namespaces/deployment-7592/pods/webserver-deployment-595b5b9587-6wwbl 670997f9-be13-411e-9254-f6c68b4401b8 14020512 0 2020-05-06 23:03:56 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1869a0b7-330d-413e-94e5-16bed94c07e5 0xc002bf7a37 0xc002bf7a38}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qlpx2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qlpx2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qlpx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:03:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:03:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.48,StartTime:2020-05-06 23:03:56 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-06 23:04:05 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://7a259cae26b81eada6b220d2e930e9403ca480ec53843ca63403eca5b91260bb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.48,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 23:04:13.816: INFO: Pod "webserver-deployment-595b5b9587-77tlm" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-77tlm webserver-deployment-595b5b9587- deployment-7592 /api/v1/namespaces/deployment-7592/pods/webserver-deployment-595b5b9587-77tlm 9670b03a-dc60-486b-9292-7ee00faa6675 14020655 0 2020-05-06 23:04:13 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1869a0b7-330d-413e-94e5-16bed94c07e5 0xc002bf7bc7 0xc002bf7bc8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qlpx2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qlpx2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qlpx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value
:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 23:04:13.816: INFO: Pod "webserver-deployment-595b5b9587-7s8sg" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-7s8sg webserver-deployment-595b5b9587- deployment-7592 /api/v1/namespaces/deployment-7592/pods/webserver-deployment-595b5b9587-7s8sg ffd9da5d-d2af-4a21-8676-f2a7a898d44b 14020653 0 2020-05-06 23:04:13 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1869a0b7-330d-413e-94e5-16bed94c07e5 0xc002bf7d67 0xc002bf7d68}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qlpx2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qlpx2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qlpx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecut
e,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 23:04:13.816: INFO: Pod "webserver-deployment-595b5b9587-bjsv5" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-bjsv5 webserver-deployment-595b5b9587- deployment-7592 /api/v1/namespaces/deployment-7592/pods/webserver-deployment-595b5b9587-bjsv5 3ba760a4-52d9-4cc9-ba63-486f276201b4 14020645 0 2020-05-06 23:04:13 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1869a0b7-330d-413e-94e5-16bed94c07e5 0xc002bf7ea7 0xc002bf7ea8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qlpx2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qlpx2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qlpx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]
Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 23:04:13.817: INFO: Pod "webserver-deployment-595b5b9587-cnvwb" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-cnvwb webserver-deployment-595b5b9587- deployment-7592 /api/v1/namespaces/deployment-7592/pods/webserver-deployment-595b5b9587-cnvwb 0e796795-f22c-4ac3-a104-cb4edf072107 14020654 0 2020-05-06 23:04:13 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1869a0b7-330d-413e-94e5-16bed94c07e5 0xc002c24017 0xc002c24018}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qlpx2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qlpx2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qlpx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:
default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 23:04:13.817: INFO: Pod "webserver-deployment-595b5b9587-ct9cn" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-ct9cn webserver-deployment-595b5b9587- deployment-7592 /api/v1/namespaces/deployment-7592/pods/webserver-deployment-595b5b9587-ct9cn f5d42159-25ea-43d3-8417-34ba22a8707c 14020520 0 2020-05-06 23:03:56 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1869a0b7-330d-413e-94e5-16bed94c07e5 0xc002c24137 0xc002c24138}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qlpx2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qlpx2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qlpx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},
ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:03:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:03:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.218,StartTime:2020-05-06 23:03:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-06 23:04:07 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://b8557535c2c27e5050e39c87e52eee56aea42730b0488190d89e64f43ed16243,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.218,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 23:04:13.817: INFO: Pod "webserver-deployment-595b5b9587-jzfbw" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-jzfbw webserver-deployment-595b5b9587- deployment-7592 /api/v1/namespaces/deployment-7592/pods/webserver-deployment-595b5b9587-jzfbw a8b81cb2-a398-44c2-bb08-54a439faef09 14020667 0 2020-05-06 23:04:13 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1869a0b7-330d-413e-94e5-16bed94c07e5 0xc002c242b7 0xc002c242b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qlpx2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qlpx2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qlpx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 23:04:13.817: INFO: Pod "webserver-deployment-595b5b9587-n984t" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-n984t webserver-deployment-595b5b9587- deployment-7592 /api/v1/namespaces/deployment-7592/pods/webserver-deployment-595b5b9587-n984t 057d3737-116b-4e82-b928-1a0383a5c46a 14020482 0 2020-05-06 23:03:56 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 1869a0b7-330d-413e-94e5-16bed94c07e5 0xc002c243d7 0xc002c243d8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qlpx2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qlpx2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qlpx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:03:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:03:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.217,StartTime:2020-05-06 23:03:56 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-06 23:04:01 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e2b79b3694cf513456923982f2386ba3e3edbd7e8ff4c4569415b96cd5b5b8f3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.217,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 23:04:13.817: INFO: Pod "webserver-deployment-595b5b9587-pbz87" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-pbz87 webserver-deployment-595b5b9587- deployment-7592 /api/v1/namespaces/deployment-7592/pods/webserver-deployment-595b5b9587-pbz87 74742cf0-5ac1-459d-a49a-aa7889f722dd 14020678 0 2020-05-06 23:04:13 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1869a0b7-330d-413e-94e5-16bed94c07e5 0xc002c24557 0xc002c24558}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qlpx2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qlpx2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qlpx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Val
ue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-06 23:04:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 23:04:13.818: INFO: Pod "webserver-deployment-595b5b9587-rqlp6" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-rqlp6 webserver-deployment-595b5b9587- deployment-7592 /api/v1/namespaces/deployment-7592/pods/webserver-deployment-595b5b9587-rqlp6 1a7981d5-0eb9-4a8a-b127-225fa5fd6834 14020540 0 2020-05-06 23:03:56 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1869a0b7-330d-413e-94e5-16bed94c07e5 0xc002c246b7 0xc002c246b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qlpx2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qlpx2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qlpx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:03:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:03:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.220,StartTime:2020-05-06 23:03:57 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-06 23:04:08 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://b8f57782345de3b403686fa136396ce1d29e13b58432ac3eade1b597793f995d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.220,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 23:04:13.818: INFO: Pod "webserver-deployment-595b5b9587-s8qxl" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-s8qxl webserver-deployment-595b5b9587- deployment-7592 /api/v1/namespaces/deployment-7592/pods/webserver-deployment-595b5b9587-s8qxl f36f2f18-a1d3-43ac-b91c-b17a0afd96d3 14020544 0 2020-05-06 23:03:56 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1869a0b7-330d-413e-94e5-16bed94c07e5 0xc002c24837 0xc002c24838}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qlpx2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qlpx2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qlpx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,
Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:03:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:03:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.221,StartTime:2020-05-06 23:03:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-06 23:04:08 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://598fef0aedb17311efca0b5f712512c464ae8ffabf445ae581cb1f88617896f3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.221,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 23:04:13.818: INFO: Pod "webserver-deployment-595b5b9587-sfc69" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-sfc69 webserver-deployment-595b5b9587- deployment-7592 /api/v1/namespaces/deployment-7592/pods/webserver-deployment-595b5b9587-sfc69 dcb927b9-de4a-4e7f-b8d3-a5db8918bbb2 14020669 0 2020-05-06 23:04:13 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1869a0b7-330d-413e-94e5-16bed94c07e5 0xc002c249b7 0xc002c249b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qlpx2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qlpx2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qlpx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 23:04:13.819: INFO: Pod "webserver-deployment-595b5b9587-tsjc4" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-tsjc4 webserver-deployment-595b5b9587- deployment-7592 /api/v1/namespaces/deployment-7592/pods/webserver-deployment-595b5b9587-tsjc4 e8646723-7d67-4f16-acf6-168464904a9b 14020668 0 2020-05-06 23:04:13 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 1869a0b7-330d-413e-94e5-16bed94c07e5 0xc002c24ad7 0xc002c24ad8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qlpx2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qlpx2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qlpx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 23:04:13.819: INFO: Pod "webserver-deployment-595b5b9587-tvrtm" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-tvrtm webserver-deployment-595b5b9587- deployment-7592 /api/v1/namespaces/deployment-7592/pods/webserver-deployment-595b5b9587-tvrtm a325d2a3-cbdc-47c3-95c9-9c013f8f41f3 14020670 0 
2020-05-06 23:04:13 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1869a0b7-330d-413e-94e5-16bed94c07e5 0xc002c24c47 0xc002c24c48}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qlpx2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qlpx2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qlpx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 23:04:13.820: INFO: Pod "webserver-deployment-595b5b9587-v8lkl" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-v8lkl webserver-deployment-595b5b9587- deployment-7592 
/api/v1/namespaces/deployment-7592/pods/webserver-deployment-595b5b9587-v8lkl a75e55ae-fe31-46b5-a0d3-59e344897cea 14020665 0 2020-05-06 23:04:13 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1869a0b7-330d-413e-94e5-16bed94c07e5 0xc002c24d97 0xc002c24d98}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qlpx2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qlpx2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qlpx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 23:04:13.820: INFO: Pod "webserver-deployment-595b5b9587-vf69j" is available: 
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-vf69j webserver-deployment-595b5b9587- deployment-7592 /api/v1/namespaces/deployment-7592/pods/webserver-deployment-595b5b9587-vf69j 3bfbd6a8-01e2-4a96-b357-7b2185bae391 14020511 0 2020-05-06 23:03:56 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1869a0b7-330d-413e-94e5-16bed94c07e5 0xc002c24ee7 0xc002c24ee8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qlpx2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qlpx2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qlpx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:03:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 
00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:03:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.219,StartTime:2020-05-06 23:03:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-06 23:04:05 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://7671d7bd94ff8a5e0074eba3bf34f9203b9fce4bbd75cfc5e2d728ff70b94ed1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.219,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 23:04:13.820: INFO: Pod "webserver-deployment-595b5b9587-whsq9" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-whsq9 webserver-deployment-595b5b9587- deployment-7592 /api/v1/namespaces/deployment-7592/pods/webserver-deployment-595b5b9587-whsq9 1b424a3b-f86c-4ff4-8dd9-1b3a59f120aa 14020518 0 2020-05-06 23:03:56 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1869a0b7-330d-413e-94e5-16bed94c07e5 0xc002c250c7 0xc002c250c8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qlpx2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qlpx2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qlpx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{
},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:03:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:03:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.47,StartTime:2020-05-06 23:03:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-06 23:04:05 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://2b144152e1a73c9296ca9260f65b0e40947dfc259d62dbb928e847c39336d5be,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.47,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 23:04:13.820: INFO: Pod "webserver-deployment-595b5b9587-wr96h" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-wr96h webserver-deployment-595b5b9587- deployment-7592 /api/v1/namespaces/deployment-7592/pods/webserver-deployment-595b5b9587-wr96h 34ba18e8-42fa-4ce2-8620-8d0053297574 14020635 0 2020-05-06 23:04:13 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 1869a0b7-330d-413e-94e5-16bed94c07e5 0xc002c252a7 0xc002c252a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qlpx2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qlpx2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qlpx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 23:04:13.821: INFO: Pod "webserver-deployment-c7997dcc8-4vdvb" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-4vdvb webserver-deployment-c7997dcc8- deployment-7592 /api/v1/namespaces/deployment-7592/pods/webserver-deployment-c7997dcc8-4vdvb 32be399e-d3fb-4c4d-a7ab-ce00e7e3f4dd 14020605 0 2020-05-06 23:04:11 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet 
webserver-deployment-c7997dcc8 20387fae-242b-4133-8327-3969d10ddaa4 0xc002c253e7 0xc002c253e8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qlpx2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qlpx2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qlpx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:11 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-06 23:04:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 23:04:13.821: INFO: Pod "webserver-deployment-c7997dcc8-9s4gj" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-9s4gj webserver-deployment-c7997dcc8- deployment-7592 /api/v1/namespaces/deployment-7592/pods/webserver-deployment-c7997dcc8-9s4gj c3ef36eb-25e1-46ba-a35a-374f70d9db6e 14020656 0 2020-05-06 23:04:13 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 20387fae-242b-4133-8327-3969d10ddaa4 0xc002c25577 0xc002c25578}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qlpx2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qlpx2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qlpx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,Readin
essGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 23:04:13.822: INFO: Pod "webserver-deployment-c7997dcc8-cqqqf" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-cqqqf webserver-deployment-c7997dcc8- deployment-7592 /api/v1/namespaces/deployment-7592/pods/webserver-deployment-c7997dcc8-cqqqf 4b7d371b-5104-42ca-986a-934125f818b2 14020663 0 2020-05-06 23:04:13 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 20387fae-242b-4133-8327-3969d10ddaa4 0xc002c256a7 0xc002c256a8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qlpx2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qlpx2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qlpx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClass
Name:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 23:04:13.822: INFO: Pod "webserver-deployment-c7997dcc8-dv5md" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-dv5md webserver-deployment-c7997dcc8- deployment-7592 /api/v1/namespaces/deployment-7592/pods/webserver-deployment-c7997dcc8-dv5md f7383111-15fc-47e3-9875-5a593a964167 14020658 0 2020-05-06 23:04:13 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 20387fae-242b-4133-8327-3969d10ddaa4 0xc002c25837 0xc002c25838}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qlpx2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qlpx2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qlpx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,Tol
erationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 23:04:13.822: INFO: Pod "webserver-deployment-c7997dcc8-gs6fv" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-gs6fv webserver-deployment-c7997dcc8- deployment-7592 /api/v1/namespaces/deployment-7592/pods/webserver-deployment-c7997dcc8-gs6fv 496b95ac-582b-410f-88b5-21f81f074f03 14020664 0 2020-05-06 23:04:13 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 20387fae-242b-4133-8327-3969d10ddaa4 0xc002c259e7 0xc002c259e8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qlpx2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qlpx2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qlpx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kuber
netes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 23:04:13.823: INFO: Pod "webserver-deployment-c7997dcc8-kghz5" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-kghz5 webserver-deployment-c7997dcc8- deployment-7592 /api/v1/namespaces/deployment-7592/pods/webserver-deployment-c7997dcc8-kghz5 3e74ebfc-f6fa-41b6-9bf5-9c7725fae201 14020597 0 2020-05-06 23:04:10 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 20387fae-242b-4133-8327-3969d10ddaa4 0xc002c25b67 0xc002c25b68}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qlpx2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qlpx2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qlpx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effe
ct:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-06 23:04:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 23:04:13.823: INFO: Pod "webserver-deployment-c7997dcc8-mtcvj" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-mtcvj webserver-deployment-c7997dcc8- deployment-7592 /api/v1/namespaces/deployment-7592/pods/webserver-deployment-c7997dcc8-mtcvj 82766995-6947-4a52-af35-68ed58ba1ca2 14020662 0 2020-05-06 23:04:13 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 20387fae-242b-4133-8327-3969d10ddaa4 0xc002c25d77 0xc002c25d78}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qlpx2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qlpx2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qlpx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 23:04:13.823: INFO: Pod "webserver-deployment-c7997dcc8-qhv4g" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-qhv4g webserver-deployment-c7997dcc8- deployment-7592 /api/v1/namespaces/deployment-7592/pods/webserver-deployment-c7997dcc8-qhv4g ae578304-7bfe-4bfc-a199-db9dbe36b771 14020673 0 2020-05-06 23:04:13 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 
20387fae-242b-4133-8327-3969d10ddaa4 0xc002c25ec7 0xc002c25ec8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qlpx2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qlpx2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qlpx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 23:04:13.824: INFO: Pod "webserver-deployment-c7997dcc8-qr5vv" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-qr5vv webserver-deployment-c7997dcc8- deployment-7592 /api/v1/namespaces/deployment-7592/pods/webserver-deployment-c7997dcc8-qr5vv 5a0769fb-10ab-47fc-adbe-e69b29f5fdcc 14020584 0 2020-05-06 23:04:10 +0000 UTC map[name:httpd 
pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 20387fae-242b-4133-8327-3969d10ddaa4 0xc002bbe057 0xc002bbe058}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qlpx2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qlpx2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qlpx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-05-06 23:04:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-06 23:04:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 23:04:13.824: INFO: Pod "webserver-deployment-c7997dcc8-tb627" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-tb627 webserver-deployment-c7997dcc8- deployment-7592 /api/v1/namespaces/deployment-7592/pods/webserver-deployment-c7997dcc8-tb627 e15f4fd6-cea6-4ea1-905a-08d8463f107d 14020610 0 2020-05-06 23:04:11 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 20387fae-242b-4133-8327-3969d10ddaa4 0xc002bbe237 0xc002bbe238}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qlpx2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qlpx2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qlpx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority
:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:11 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:11 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-06 23:04:11 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 23:04:13.824: INFO: Pod "webserver-deployment-c7997dcc8-x648t" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-x648t webserver-deployment-c7997dcc8- deployment-7592 /api/v1/namespaces/deployment-7592/pods/webserver-deployment-c7997dcc8-x648t 1faffa10-e27a-4d71-8ef7-c195928b2e45 14020682 0 2020-05-06 23:04:13 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 20387fae-242b-4133-8327-3969d10ddaa4 0xc002bbe3e7 0xc002bbe3e8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qlpx2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qlpx2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qlpx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-06 23:04:13 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 23:04:13.826: INFO: Pod "webserver-deployment-c7997dcc8-xthwb" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-xthwb webserver-deployment-c7997dcc8- deployment-7592 /api/v1/namespaces/deployment-7592/pods/webserver-deployment-c7997dcc8-xthwb fb362b9b-3998-4881-833e-5760e38fc167 14020666 0 2020-05-06 23:04:13 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 20387fae-242b-4133-8327-3969d10ddaa4 0xc002bbe597 0xc002bbe598}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qlpx2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qlpx2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qlpx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhea
d:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 23:04:13.826: INFO: Pod "webserver-deployment-c7997dcc8-z7wp2" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-z7wp2 webserver-deployment-c7997dcc8- deployment-7592 /api/v1/namespaces/deployment-7592/pods/webserver-deployment-c7997dcc8-z7wp2 6ed8c2d2-5379-4db9-8138-f8b9ab042db8 14020587 0 2020-05-06 23:04:10 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 20387fae-242b-4133-8327-3969d10ddaa4 0xc002bbe6d7 0xc002bbe6d8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qlpx2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qlpx2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qlpx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeCla
ssName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:04:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-06 23:04:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:04:13.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7592" for this suite. 
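
The pod dumps above are the deployment controller's not-yet-available replicas mid-rollout; the spec being verified is that resizing a Deployment while its rollout is stuck (here, on the unresolvable webserver:404 image) splits the new replica count proportionally between the old and new ReplicaSets. A minimal sketch of the triggering operation, assuming client-go v0.17 (matching this run's kube-apiserver) and an already-constructed clientset cs; function and variable names are illustrative, not the suite's own helpers:

package main

import (
    appsv1 "k8s.io/api/apps/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// scaleMidRollout resizes a Deployment whose rollout is still in progress.
// With maxSurge/maxUnavailable set, the deployment controller distributes the
// new total proportionally between the old and new ReplicaSets rather than
// draining one side first.
func scaleMidRollout(cs kubernetes.Interface, ns, name string, replicas int32) (*appsv1.Deployment, error) {
    d, err := cs.AppsV1().Deployments(ns).Get(name, metav1.GetOptions{})
    if err != nil {
        return nil, err
    }
    d.Spec.Replicas = &replicas // e.g. grow the total while webserver:404 pods are still Pending
    return cs.AppsV1().Deployments(ns).Update(d)
}
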
• [SLOW TEST:17.515 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":41,"skipped":485,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:04:14.159: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 6 23:04:14.920: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-6782 /api/v1/namespaces/watch-6782/configmaps/e2e-watch-test-resource-version c0cb6a6d-3ad7-45e9-99ec-140f964040ab 14020723 0 2020-05-06 23:04:14 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 6 23:04:14.921: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-6782 /api/v1/namespaces/watch-6782/configmaps/e2e-watch-test-resource-version c0cb6a6d-3ad7-45e9-99ec-140f964040ab 14020725 0 2020-05-06 23:04:14 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:04:14.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6782" for this suite. 
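
The watch spec above opens a watch at the ResourceVersion returned by the first update and expects to replay only the later MODIFIED and DELETED events. A sketch of that pattern, assuming client-go v0.17 and a connected clientset cs; all names are illustrative:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// watchFromVersion replays every ConfigMap event that happened after rv, the
// ResourceVersion captured from an earlier update of the object.
func watchFromVersion(cs kubernetes.Interface, ns, rv string) error {
    w, err := cs.CoreV1().ConfigMaps(ns).Watch(metav1.ListOptions{
        ResourceVersion: rv, // deliver only events newer than this revision
    })
    if err != nil {
        return err
    }
    defer w.Stop()
    for ev := range w.ResultChan() {
        cm, ok := ev.Object.(*corev1.ConfigMap)
        if !ok {
            continue // e.g. watch.Error events carry a *metav1.Status instead
        }
        fmt.Printf("Got : %s %s mutation=%s\n", ev.Type, cm.Name, cm.Data["mutation"])
    }
    return nil
}
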
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":42,"skipped":552,"failed":0} SSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:04:15.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service nodeport-service with the type=NodePort in namespace services-1417 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-1417 STEP: creating replication controller externalsvc in namespace services-1417 I0506 23:04:18.341028 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-1417, replica count: 2 I0506 23:04:21.391589 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 23:04:24.391813 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 23:04:27.392009 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 23:04:30.392208 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 23:04:33.392394 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 23:04:36.392589 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 23:04:39.392820 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName May 6 23:04:39.712: INFO: Creating new exec pod May 6 23:04:47.758: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1417 execpodrdf8j -- /bin/sh -x -c nslookup nodeport-service' May 6 23:04:47.971: INFO: stderr: "I0506 23:04:47.896000 1381 log.go:172] (0xc000a7a000) (0xc0006279a0) Create stream\nI0506 23:04:47.896060 1381 log.go:172] (0xc000a7a000) (0xc0006279a0) Stream added, broadcasting: 1\nI0506 23:04:47.899177 1381 log.go:172] (0xc000a7a000) Reply frame received for 1\nI0506 23:04:47.899215 1381 log.go:172] (0xc000a7a000) (0xc000946000) Create stream\nI0506 23:04:47.899227 1381 log.go:172] (0xc000a7a000) (0xc000946000) Stream added, broadcasting: 3\nI0506 23:04:47.900091 1381 
log.go:172] (0xc000a7a000) Reply frame received for 3\nI0506 23:04:47.900127 1381 log.go:172] (0xc000a7a000) (0xc0005b05a0) Create stream\nI0506 23:04:47.900138 1381 log.go:172] (0xc000a7a000) (0xc0005b05a0) Stream added, broadcasting: 5\nI0506 23:04:47.901031 1381 log.go:172] (0xc000a7a000) Reply frame received for 5\nI0506 23:04:47.956386 1381 log.go:172] (0xc000a7a000) Data frame received for 5\nI0506 23:04:47.956411 1381 log.go:172] (0xc0005b05a0) (5) Data frame handling\nI0506 23:04:47.956426 1381 log.go:172] (0xc0005b05a0) (5) Data frame sent\n+ nslookup nodeport-service\nI0506 23:04:47.961776 1381 log.go:172] (0xc000a7a000) Data frame received for 3\nI0506 23:04:47.961794 1381 log.go:172] (0xc000946000) (3) Data frame handling\nI0506 23:04:47.961809 1381 log.go:172] (0xc000946000) (3) Data frame sent\nI0506 23:04:47.963264 1381 log.go:172] (0xc000a7a000) Data frame received for 3\nI0506 23:04:47.963293 1381 log.go:172] (0xc000946000) (3) Data frame handling\nI0506 23:04:47.963321 1381 log.go:172] (0xc000946000) (3) Data frame sent\nI0506 23:04:47.963767 1381 log.go:172] (0xc000a7a000) Data frame received for 5\nI0506 23:04:47.963815 1381 log.go:172] (0xc0005b05a0) (5) Data frame handling\nI0506 23:04:47.963845 1381 log.go:172] (0xc000a7a000) Data frame received for 3\nI0506 23:04:47.963870 1381 log.go:172] (0xc000946000) (3) Data frame handling\nI0506 23:04:47.966433 1381 log.go:172] (0xc000a7a000) Data frame received for 1\nI0506 23:04:47.966449 1381 log.go:172] (0xc0006279a0) (1) Data frame handling\nI0506 23:04:47.966458 1381 log.go:172] (0xc0006279a0) (1) Data frame sent\nI0506 23:04:47.966485 1381 log.go:172] (0xc000a7a000) (0xc0006279a0) Stream removed, broadcasting: 1\nI0506 23:04:47.966509 1381 log.go:172] (0xc000a7a000) Go away received\nI0506 23:04:47.966910 1381 log.go:172] (0xc000a7a000) (0xc0006279a0) Stream removed, broadcasting: 1\nI0506 23:04:47.966934 1381 log.go:172] (0xc000a7a000) (0xc000946000) Stream removed, broadcasting: 3\nI0506 23:04:47.966945 1381 log.go:172] (0xc000a7a000) (0xc0005b05a0) Stream removed, broadcasting: 5\n" May 6 23:04:47.971: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-1417.svc.cluster.local\tcanonical name = externalsvc.services-1417.svc.cluster.local.\nName:\texternalsvc.services-1417.svc.cluster.local\nAddress: 10.102.57.191\n\n" STEP: deleting ReplicationController externalsvc in namespace services-1417, will wait for the garbage collector to delete the pods May 6 23:04:48.031: INFO: Deleting ReplicationController externalsvc took: 6.450165ms May 6 23:04:48.132: INFO: Terminating ReplicationController externalsvc pods took: 100.285354ms May 6 23:04:59.640: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:04:59.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1417" for this suite. 
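
The type change driven above can be sketched with client-go as follows; this is illustrative only (client-go v0.17 assumed, clientset cs already built), not the suite's own helper:

package main

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// toExternalName rewrites an existing NodePort service into an ExternalName
// service pointing at target (e.g. "externalsvc.services-1417.svc.cluster.local").
// ClusterIP and ports are dropped, since an ExternalName service is a pure
// DNS CNAME record with no allocated IP and no proxying.
func toExternalName(cs kubernetes.Interface, ns, name, target string) (*corev1.Service, error) {
    svc, err := cs.CoreV1().Services(ns).Get(name, metav1.GetOptions{})
    if err != nil {
        return nil, err
    }
    svc.Spec.Type = corev1.ServiceTypeExternalName
    svc.Spec.ExternalName = target
    svc.Spec.ClusterIP = "" // required: ExternalName services must have no cluster IP
    svc.Spec.Ports = nil    // nodePorts must also be released
    return cs.CoreV1().Services(ns).Update(svc)
}
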
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:44.277 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":43,"skipped":556,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:04:59.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-5508 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-5508 STEP: creating replication controller externalsvc in namespace services-5508 I0506 23:04:59.862254 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-5508, replica count: 2 I0506 23:05:02.912689 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 23:05:05.912970 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName May 6 23:05:06.207: INFO: Creating new exec pod May 6 23:05:12.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5508 execpodf4rlw -- /bin/sh -x -c nslookup clusterip-service' May 6 23:05:12.637: INFO: stderr: "I0506 23:05:12.560450 1401 log.go:172] (0xc0008c4000) (0xc0007dfa40) Create stream\nI0506 23:05:12.560508 1401 log.go:172] (0xc0008c4000) (0xc0007dfa40) Stream added, broadcasting: 1\nI0506 23:05:12.563087 1401 log.go:172] (0xc0008c4000) Reply frame received for 1\nI0506 23:05:12.563128 1401 log.go:172] (0xc0008c4000) (0xc0007dfae0) Create stream\nI0506 23:05:12.563139 1401 log.go:172] (0xc0008c4000) (0xc0007dfae0) Stream added, broadcasting: 3\nI0506 23:05:12.564150 1401 log.go:172] (0xc0008c4000) Reply frame received for 3\nI0506 23:05:12.564183 1401 log.go:172] (0xc0008c4000) (0xc0008aa000) Create stream\nI0506 23:05:12.564192 1401 log.go:172] (0xc0008c4000) (0xc0008aa000) Stream added, broadcasting: 5\nI0506 23:05:12.565022 1401 log.go:172] (0xc0008c4000) Reply frame received 
for 5\nI0506 23:05:12.616190 1401 log.go:172] (0xc0008c4000) Data frame received for 5\nI0506 23:05:12.616215 1401 log.go:172] (0xc0008aa000) (5) Data frame handling\nI0506 23:05:12.616227 1401 log.go:172] (0xc0008aa000) (5) Data frame sent\n+ nslookup clusterip-service\nI0506 23:05:12.626354 1401 log.go:172] (0xc0008c4000) Data frame received for 3\nI0506 23:05:12.626380 1401 log.go:172] (0xc0007dfae0) (3) Data frame handling\nI0506 23:05:12.626405 1401 log.go:172] (0xc0007dfae0) (3) Data frame sent\nI0506 23:05:12.627214 1401 log.go:172] (0xc0008c4000) Data frame received for 3\nI0506 23:05:12.627234 1401 log.go:172] (0xc0007dfae0) (3) Data frame handling\nI0506 23:05:12.627246 1401 log.go:172] (0xc0007dfae0) (3) Data frame sent\nI0506 23:05:12.627736 1401 log.go:172] (0xc0008c4000) Data frame received for 5\nI0506 23:05:12.627763 1401 log.go:172] (0xc0008aa000) (5) Data frame handling\nI0506 23:05:12.627785 1401 log.go:172] (0xc0008c4000) Data frame received for 3\nI0506 23:05:12.627818 1401 log.go:172] (0xc0007dfae0) (3) Data frame handling\nI0506 23:05:12.629780 1401 log.go:172] (0xc0008c4000) Data frame received for 1\nI0506 23:05:12.629803 1401 log.go:172] (0xc0007dfa40) (1) Data frame handling\nI0506 23:05:12.629819 1401 log.go:172] (0xc0007dfa40) (1) Data frame sent\nI0506 23:05:12.629894 1401 log.go:172] (0xc0008c4000) (0xc0007dfa40) Stream removed, broadcasting: 1\nI0506 23:05:12.630165 1401 log.go:172] (0xc0008c4000) (0xc0007dfa40) Stream removed, broadcasting: 1\nI0506 23:05:12.630189 1401 log.go:172] (0xc0008c4000) (0xc0007dfae0) Stream removed, broadcasting: 3\nI0506 23:05:12.630203 1401 log.go:172] (0xc0008c4000) (0xc0008aa000) Stream removed, broadcasting: 5\n" May 6 23:05:12.637: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-5508.svc.cluster.local\tcanonical name = externalsvc.services-5508.svc.cluster.local.\nName:\texternalsvc.services-5508.svc.cluster.local\nAddress: 10.105.218.37\n\n" STEP: deleting ReplicationController externalsvc in namespace services-5508, will wait for the garbage collector to delete the pods May 6 23:05:12.697: INFO: Deleting ReplicationController externalsvc took: 6.791171ms May 6 23:05:13.197: INFO: Terminating ReplicationController externalsvc pods took: 500.265005ms May 6 23:05:18.223: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:05:18.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5508" for this suite. 
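
The nslookup step above is checking that an ExternalName service resolves as a DNS CNAME for its target. Run from inside a pod (cluster DNS is not reachable from outside the cluster), the same check is roughly the following; the hostname mirrors the namespace in the log and is otherwise hypothetical:

package main

import (
    "fmt"
    "net"
)

func main() {
    // Resolve the service's cluster DNS name and confirm it is a CNAME
    // for the externalsvc backing service.
    cname, err := net.LookupCNAME("clusterip-service.services-5508.svc.cluster.local")
    if err != nil {
        panic(err)
    }
    fmt.Println("canonical name =", cname) // expect externalsvc.services-5508.svc.cluster.local.
}
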
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:18.582 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":44,"skipped":616,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:05:18.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:05:23.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4794" for this suite. 
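
Adoption in the spec above is observable on the pod itself: once a ReplicationController with a matching selector exists, the RC manager patches a controller ownerReference onto the orphan. A small check, assuming client-go v0.17 and a clientset cs; names are illustrative:

package main

import (
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// isAdopted reports whether the previously orphaned pod now carries a
// controller ownerReference pointing at the ReplicationController.
func isAdopted(cs kubernetes.Interface, ns, podName, rcName string) (bool, error) {
    pod, err := cs.CoreV1().Pods(ns).Get(podName, metav1.GetOptions{})
    if err != nil {
        return false, err
    }
    ref := metav1.GetControllerOf(pod)
    return ref != nil && ref.Kind == "ReplicationController" && ref.Name == rcName, nil
}
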
• [SLOW TEST:5.225 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":45,"skipped":655,"failed":0} SSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:05:23.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 6 23:05:23.582: INFO: Pod name pod-release: Found 0 pods out of 1 May 6 23:05:28.585: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:05:28.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9246" for this suite. • [SLOW TEST:5.251 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":46,"skipped":661,"failed":0} SSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:05:28.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 6 23:05:29.011: INFO: (0) /api/v1/nodes/jerma-worker2/proxy/logs/:
containers/ pods/ (200; 24.860784ms)
May 6 23:05:29.014: INFO: (1) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.690263ms)
May 6 23:05:29.019: INFO: (2) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 4.642491ms)
May 6 23:05:29.026: INFO: (3) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 6.668691ms)
May 6 23:05:29.029: INFO: (4) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.299615ms)
May 6 23:05:29.033: INFO: (5) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.61398ms)
May 6 23:05:29.036: INFO: (6) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.069692ms)
May 6 23:05:29.074: INFO: (7) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 38.229854ms)
May 6 23:05:29.221: INFO: (8) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 146.52062ms)
May 6 23:05:29.225: INFO: (9) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 4.79476ms)
May 6 23:05:29.229: INFO: (10) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.417552ms)
May 6 23:05:29.232: INFO: (11) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.159381ms)
May 6 23:05:29.235: INFO: (12) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.012311ms)
May 6 23:05:29.238: INFO: (13) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.042045ms)
May 6 23:05:29.489: INFO: (14) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 250.972243ms)
May 6 23:05:29.492: INFO: (15) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.206113ms)
May 6 23:05:29.495: INFO: (16) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 2.962483ms)
May 6 23:05:29.499: INFO: (17) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.258673ms)
May 6 23:05:29.502: INFO: (18) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/ (200; 3.016163ms)
May 6 23:05:29.504: INFO: (19) /api/v1/nodes/jerma-worker2/proxy/logs/: containers/ pods/
(200; 2.621706ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:05:29.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-4609" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":278,"completed":47,"skipped":664,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:05:29.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 6 23:05:30.731: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 6 23:05:32.741: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724403130, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724403130, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724403130, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724403130, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 23:05:34.745: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724403130, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724403130, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724403130, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724403130, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the 
webhook service STEP: Verifying the service has paired with the endpoint May 6 23:05:37.776: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:05:37.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-109" for this suite. STEP: Destroying namespace "webhook-109-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.536 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":48,"skipped":691,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:05:38.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 6 23:05:38.726: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 6 23:05:40.824: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724403138, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724403138, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724403138, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724403138, loc:(*time.Location)(0x78ee0c0)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 23:05:42.828: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724403138, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724403138, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724403138, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724403138, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 23:05:45.867: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:05:46.459: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4185" for this suite. STEP: Destroying namespace "webhook-4185-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.620 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":49,"skipped":700,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:05:46.669: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 6 23:05:46.818: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 6 23:05:50.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6301 create -f -' May 6 23:05:53.243: INFO: stderr: "" May 6 23:05:53.243: INFO: stdout: "e2e-test-crd-publish-openapi-6075-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 6 23:05:53.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6301 delete e2e-test-crd-publish-openapi-6075-crds test-cr' May 6 23:05:53.357: INFO: stderr: "" May 6 23:05:53.357: INFO: stdout: "e2e-test-crd-publish-openapi-6075-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" May 6 23:05:53.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6301 apply -f -' May 6 23:05:53.630: INFO: stderr: "" May 6 23:05:53.630: INFO: stdout: "e2e-test-crd-publish-openapi-6075-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 6 23:05:53.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6301 delete e2e-test-crd-publish-openapi-6075-crds test-cr' May 6 23:05:53.746: INFO: stderr: "" May 6 23:05:53.746: INFO: stdout: "e2e-test-crd-publish-openapi-6075-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 6 23:05:53.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-6075-crds' May 6 23:05:54.028: INFO: stderr: "" May 6 23:05:54.028: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-6075-crd\nVERSION: 
crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:05:56.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6301" for this suite. • [SLOW TEST:10.239 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":50,"skipped":756,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:05:56.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-afff2abb-6f7a-4862-9e04-88400948a55b STEP: Creating a pod to test consume configMaps May 6 23:05:57.033: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-87bf7447-1a17-4f30-9811-94c08c1f7f42" in namespace "projected-8035" to be "success or failure" May 6 23:05:57.046: INFO: Pod "pod-projected-configmaps-87bf7447-1a17-4f30-9811-94c08c1f7f42": Phase="Pending", Reason="", readiness=false. Elapsed: 13.693654ms May 6 23:05:59.053: INFO: Pod "pod-projected-configmaps-87bf7447-1a17-4f30-9811-94c08c1f7f42": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01982879s May 6 23:06:01.057: INFO: Pod "pod-projected-configmaps-87bf7447-1a17-4f30-9811-94c08c1f7f42": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024348498s STEP: Saw pod success May 6 23:06:01.057: INFO: Pod "pod-projected-configmaps-87bf7447-1a17-4f30-9811-94c08c1f7f42" satisfied condition "success or failure" May 6 23:06:01.060: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-87bf7447-1a17-4f30-9811-94c08c1f7f42 container projected-configmap-volume-test: STEP: delete the pod May 6 23:06:01.123: INFO: Waiting for pod pod-projected-configmaps-87bf7447-1a17-4f30-9811-94c08c1f7f42 to disappear May 6 23:06:01.138: INFO: Pod pod-projected-configmaps-87bf7447-1a17-4f30-9811-94c08c1f7f42 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:06:01.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8035" for this suite. 
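
The "with mappings" variant above projects a ConfigMap key onto a custom file path via Items, rather than using the key name itself. A hedged sketch of such a pod (busybox stands in for the suite's own mounttest image; the key and paths are illustrative):

package main

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// projectedPod builds a pod whose projected volume maps ConfigMap key
// "data-1" to the file path/to/data-2 inside the mount.
func projectedPod(ns, cmName string) *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example", Namespace: ns},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Volumes: []corev1.Volume{{
                Name: "projected-configmap-volume",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            ConfigMap: &corev1.ConfigMapProjection{
                                LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
                                // The mapping under test: key -> chosen path.
                                Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
                            },
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:    "projected-configmap-volume-test",
                Image:   "busybox",
                Command: []string{"cat", "/etc/projected-configmap-volume/path/to/data-2"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "projected-configmap-volume",
                    MountPath: "/etc/projected-configmap-volume",
                }},
            }},
        },
    }
}
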
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":51,"skipped":772,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:06:01.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on node default medium May 6 23:06:01.281: INFO: Waiting up to 5m0s for pod "pod-9b49d7e4-d0be-4a61-b9e8-154c6b4f8351" in namespace "emptydir-3779" to be "success or failure" May 6 23:06:01.292: INFO: Pod "pod-9b49d7e4-d0be-4a61-b9e8-154c6b4f8351": Phase="Pending", Reason="", readiness=false. Elapsed: 10.161392ms May 6 23:06:03.339: INFO: Pod "pod-9b49d7e4-d0be-4a61-b9e8-154c6b4f8351": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057262123s May 6 23:06:05.356: INFO: Pod "pod-9b49d7e4-d0be-4a61-b9e8-154c6b4f8351": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.074829355s STEP: Saw pod success May 6 23:06:05.356: INFO: Pod "pod-9b49d7e4-d0be-4a61-b9e8-154c6b4f8351" satisfied condition "success or failure" May 6 23:06:05.359: INFO: Trying to get logs from node jerma-worker pod pod-9b49d7e4-d0be-4a61-b9e8-154c6b4f8351 container test-container: STEP: delete the pod May 6 23:06:05.470: INFO: Waiting for pod pod-9b49d7e4-d0be-4a61-b9e8-154c6b4f8351 to disappear May 6 23:06:05.482: INFO: Pod pod-9b49d7e4-d0be-4a61-b9e8-154c6b4f8351 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:06:05.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3779" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":52,"skipped":790,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:06:05.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 6 23:06:05.955: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 6 23:06:07.964: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724403165, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724403165, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724403166, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724403165, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 23:06:11.004: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:06:11.012: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3441" for this suite. STEP: Destroying namespace "webhook-3441-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.596 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":53,"skipped":804,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:06:11.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override arguments May 6 23:06:11.145: INFO: Waiting up to 5m0s for pod "client-containers-d9b77743-0553-4409-890b-a12760581655" in namespace "containers-4139" to be "success or failure" May 6 23:06:11.178: INFO: Pod "client-containers-d9b77743-0553-4409-890b-a12760581655": Phase="Pending", Reason="", readiness=false. Elapsed: 32.739156ms May 6 23:06:13.181: INFO: Pod "client-containers-d9b77743-0553-4409-890b-a12760581655": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03662128s May 6 23:06:15.185: INFO: Pod "client-containers-d9b77743-0553-4409-890b-a12760581655": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040104349s STEP: Saw pod success May 6 23:06:15.185: INFO: Pod "client-containers-d9b77743-0553-4409-890b-a12760581655" satisfied condition "success or failure" May 6 23:06:15.188: INFO: Trying to get logs from node jerma-worker2 pod client-containers-d9b77743-0553-4409-890b-a12760581655 container test-container: STEP: delete the pod May 6 23:06:15.208: INFO: Waiting for pod client-containers-d9b77743-0553-4409-890b-a12760581655 to disappear May 6 23:06:15.285: INFO: Pod client-containers-d9b77743-0553-4409-890b-a12760581655 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:06:15.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4139" for this suite. 
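------------------------------
The Docker Containers spec above exercises the rule that a container's args field replaces the image's CMD while leaving its ENTRYPOINT alone (setting command as well would replace the ENTRYPOINT too). A hedged sketch of an equivalent pod; busybox and the echoed text are stand-ins for whatever the suite actually runs:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "override-args"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "docker.io/library/busybox:1.29", // stand-in image
				// Args alone overrides the image's default arguments (docker CMD)
				// while keeping its entrypoint; Command would override both.
				Args: []string{"echo", "overridden arguments"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------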
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":54,"skipped":817,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:06:15.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 6 23:06:15.455: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1c7d8a1b-4489-401b-b19a-3c7efc1951b0" in namespace "downward-api-9606" to be "success or failure" May 6 23:06:15.483: INFO: Pod "downwardapi-volume-1c7d8a1b-4489-401b-b19a-3c7efc1951b0": Phase="Pending", Reason="", readiness=false. Elapsed: 28.541718ms May 6 23:06:17.486: INFO: Pod "downwardapi-volume-1c7d8a1b-4489-401b-b19a-3c7efc1951b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031649962s May 6 23:06:19.490: INFO: Pod "downwardapi-volume-1c7d8a1b-4489-401b-b19a-3c7efc1951b0": Phase="Running", Reason="", readiness=true. Elapsed: 4.0356645s May 6 23:06:21.495: INFO: Pod "downwardapi-volume-1c7d8a1b-4489-401b-b19a-3c7efc1951b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.040136943s STEP: Saw pod success May 6 23:06:21.495: INFO: Pod "downwardapi-volume-1c7d8a1b-4489-401b-b19a-3c7efc1951b0" satisfied condition "success or failure" May 6 23:06:21.498: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-1c7d8a1b-4489-401b-b19a-3c7efc1951b0 container client-container: STEP: delete the pod May 6 23:06:21.514: INFO: Waiting for pod downwardapi-volume-1c7d8a1b-4489-401b-b19a-3c7efc1951b0 to disappear May 6 23:06:21.518: INFO: Pod downwardapi-volume-1c7d8a1b-4489-401b-b19a-3c7efc1951b0 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:06:21.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9606" for this suite. 
• [SLOW TEST:6.170 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":55,"skipped":848,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:06:21.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0506 23:06:52.127234 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 6 23:06:52.127: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:06:52.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5557" for this suite. 
• [SLOW TEST:30.610 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":56,"skipped":855,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:06:52.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the initial replication controller May 6 23:06:52.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8686' May 6 23:06:52.538: INFO: stderr: "" May 6 23:06:52.538: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 6 23:06:52.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8686' May 6 23:06:52.664: INFO: stderr: "" May 6 23:06:52.664: INFO: stdout: "update-demo-nautilus-4h6nr update-demo-nautilus-znjt6 " May 6 23:06:52.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4h6nr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8686' May 6 23:06:52.777: INFO: stderr: "" May 6 23:06:52.777: INFO: stdout: "" May 6 23:06:52.777: INFO: update-demo-nautilus-4h6nr is created but not running May 6 23:06:57.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8686' May 6 23:06:58.763: INFO: stderr: "" May 6 23:06:58.763: INFO: stdout: "update-demo-nautilus-4h6nr update-demo-nautilus-znjt6 " May 6 23:06:58.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4h6nr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8686' May 6 23:06:59.171: INFO: stderr: "" May 6 23:06:59.171: INFO: stdout: "true" May 6 23:06:59.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4h6nr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8686' May 6 23:06:59.431: INFO: stderr: "" May 6 23:06:59.431: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 6 23:06:59.431: INFO: validating pod update-demo-nautilus-4h6nr May 6 23:06:59.480: INFO: got data: { "image": "nautilus.jpg" } May 6 23:06:59.480: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 6 23:06:59.480: INFO: update-demo-nautilus-4h6nr is verified up and running May 6 23:06:59.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-znjt6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8686' May 6 23:06:59.623: INFO: stderr: "" May 6 23:06:59.623: INFO: stdout: "true" May 6 23:06:59.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-znjt6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8686' May 6 23:06:59.717: INFO: stderr: "" May 6 23:06:59.717: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 6 23:06:59.717: INFO: validating pod update-demo-nautilus-znjt6 May 6 23:06:59.742: INFO: got data: { "image": "nautilus.jpg" } May 6 23:06:59.742: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 6 23:06:59.742: INFO: update-demo-nautilus-znjt6 is verified up and running STEP: rolling-update to new replication controller May 6 23:06:59.745: INFO: scanned /root for discovery docs: May 6 23:06:59.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-8686' May 6 23:07:23.359: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 6 23:07:23.359: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. May 6 23:07:23.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8686' May 6 23:07:23.463: INFO: stderr: "" May 6 23:07:23.463: INFO: stdout: "update-demo-kitten-v9srx update-demo-kitten-wgl7g " May 6 23:07:23.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-v9srx -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8686' May 6 23:07:23.567: INFO: stderr: "" May 6 23:07:23.567: INFO: stdout: "true" May 6 23:07:23.567: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-v9srx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8686' May 6 23:07:23.665: INFO: stderr: "" May 6 23:07:23.665: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 6 23:07:23.665: INFO: validating pod update-demo-kitten-v9srx May 6 23:07:23.670: INFO: got data: { "image": "kitten.jpg" } May 6 23:07:23.670: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 6 23:07:23.670: INFO: update-demo-kitten-v9srx is verified up and running May 6 23:07:23.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-wgl7g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8686' May 6 23:07:23.772: INFO: stderr: "" May 6 23:07:23.772: INFO: stdout: "true" May 6 23:07:23.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-wgl7g -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8686' May 6 23:07:23.867: INFO: stderr: "" May 6 23:07:23.867: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 6 23:07:23.867: INFO: validating pod update-demo-kitten-wgl7g May 6 23:07:23.871: INFO: got data: { "image": "kitten.jpg" } May 6 23:07:23.871: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 6 23:07:23.871: INFO: update-demo-kitten-wgl7g is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:07:23.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8686" for this suite. 
• [SLOW TEST:31.740 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":57,"skipped":881,"failed":0} SSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:07:23.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 6 23:07:28.523: INFO: Successfully updated pod "annotationupdate5d0f138a-5eb5-4c1b-87e1-45e3990ff5fd" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:07:32.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9190" for this suite. 
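------------------------------
The annotation-update spec above mutates pod metadata and then waits for the kubelet to rewrite the projected downward API file for metadata.annotations. The mutation amounts to a merge patch; the pod name, namespace, and annotation below are placeholders:

package main

import (
	"context"
	"log"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Merge-patch an annotation; the kubelet then refreshes the downward API
	// volume file that projects metadata.annotations, which the test waits on.
	patch := []byte(`{"metadata":{"annotations":{"example":"updated"}}}`) // placeholder key/value
	if _, err := clientset.CoreV1().Pods("default").Patch(context.TODO(),
		"annotationupdate-example", // placeholder pod name
		types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		log.Fatal(err)
	}
}
------------------------------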
• [SLOW TEST:9.125 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":58,"skipped":885,"failed":0} SSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:07:33.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 6 23:07:33.840: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 6 23:07:38.993: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 6 23:07:38.994: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 6 23:07:39.134: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-4691 /apis/apps/v1/namespaces/deployment-4691/deployments/test-cleanup-deployment 3a48f333-f07f-419b-afec-5e59f9b96b99 14022290 1 2020-05-06 23:07:39 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002057b28 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} May 6 23:07:39.222: INFO: New ReplicaSet
"test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-4691 /apis/apps/v1/namespaces/deployment-4691/replicasets/test-cleanup-deployment-55ffc6b7b6 e3c88cee-ece4-4f08-a918-afe64b364056 14022297 1 2020-05-06 23:07:39 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 3a48f333-f07f-419b-afec-5e59f9b96b99 0xc003bb6217 0xc003bb6218}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003bb62f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 6 23:07:39.222: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 6 23:07:39.222: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-4691 /apis/apps/v1/namespaces/deployment-4691/replicasets/test-cleanup-controller 0bcadc94-3609-48bb-bb26-386f3eaf3c9d 14022291 1 2020-05-06 23:07:33 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 3a48f333-f07f-419b-afec-5e59f9b96b99 0xc003bb60af 0xc003bb60d0}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003bb6168 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 6 23:07:39.540: INFO: Pod "test-cleanup-controller-v4sjv" is available: &Pod{ObjectMeta:{test-cleanup-controller-v4sjv test-cleanup-controller- deployment-4691 /api/v1/namespaces/deployment-4691/pods/test-cleanup-controller-v4sjv 42939d9c-0087-4f82-a33e-f63b1677074e 14022276 0 2020-05-06 23:07:33 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 0bcadc94-3609-48bb-bb26-386f3eaf3c9d 0xc00627c697 0xc00627c698}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4cxgd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4cxgd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4cxgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:07:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:07:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:07:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:07:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.77,StartTime:2020-05-06 23:07:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-06 23:07:36 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://59d040c7c8c1ab39341a3809550845b09ebc45e225f56b60fc0a34703e2a8b0c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.77,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 6 23:07:39.540: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-zq5bc" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-zq5bc test-cleanup-deployment-55ffc6b7b6- deployment-4691 /api/v1/namespaces/deployment-4691/pods/test-cleanup-deployment-55ffc6b7b6-zq5bc ed3dc4e4-068a-46e2-94f7-87ae601ba7c6 14022298 0 2020-05-06 23:07:39 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 e3c88cee-ece4-4f08-a918-afe64b364056 0xc00627c827 0xc00627c828}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4cxgd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4cxgd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4cxgd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamesp
ace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:07:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:07:39.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4691" for this suite. • [SLOW TEST:6.740 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":59,"skipped":892,"failed":0} SSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:07:39.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 6 23:07:39.885: INFO: (0) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/ pods/ (200; 4.842195ms)
May 6 23:07:39.889: INFO: (1) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 4.051902ms)
May 6 23:07:39.892: INFO: (2) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.342776ms)
May 6 23:07:39.910: INFO: (3) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 17.496793ms)
May 6 23:07:39.914: INFO: (4) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 4.079532ms)
May 6 23:07:39.918: INFO: (5) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.396797ms)
May 6 23:07:39.921: INFO: (6) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.387013ms)
May 6 23:07:39.948: INFO: (7) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 26.778383ms)
May 6 23:07:39.952: INFO: (8) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.93524ms)
May 6 23:07:39.956: INFO: (9) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 4.052833ms)
May 6 23:07:39.960: INFO: (10) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.819784ms)
May 6 23:07:39.963: INFO: (11) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.539079ms)
May 6 23:07:39.967: INFO: (12) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.552552ms)
May 6 23:07:39.970: INFO: (13) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.462591ms)
May 6 23:07:39.974: INFO: (14) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.314514ms)
May 6 23:07:39.978: INFO: (15) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 4.078902ms)
May 6 23:07:39.982: INFO: (16) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.905912ms)
May 6 23:07:39.986: INFO: (17) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.83309ms)
May 6 23:07:39.990: INFO: (18) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 4.165384ms)
May 6 23:07:40.000: INFO: (19) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/
(200; 9.845017ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:07:40.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-3530" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":278,"completed":60,"skipped":897,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:07:40.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:07:51.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-350" for this suite. • [SLOW TEST:11.233 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance]","total":278,"completed":61,"skipped":912,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:07:51.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:07:51.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9424" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":62,"skipped":930,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:07:51.390: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-59f4ca7a-f23f-450c-b91f-848f8350f6fd STEP: Creating a pod to test consume configMaps May 6 23:07:51.488: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-409ad0ce-13be-4319-88ec-8c125702fbfd" in namespace "projected-1506" to be "success or failure" May 6 23:07:51.550: INFO: Pod "pod-projected-configmaps-409ad0ce-13be-4319-88ec-8c125702fbfd": Phase="Pending", Reason="", readiness=false. Elapsed: 62.234552ms May 6 23:07:53.565: INFO: Pod "pod-projected-configmaps-409ad0ce-13be-4319-88ec-8c125702fbfd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077679574s May 6 23:07:55.731: INFO: Pod "pod-projected-configmaps-409ad0ce-13be-4319-88ec-8c125702fbfd": Phase="Running", Reason="", readiness=true. Elapsed: 4.242949548s May 6 23:07:57.741: INFO: Pod "pod-projected-configmaps-409ad0ce-13be-4319-88ec-8c125702fbfd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.253354523s STEP: Saw pod success May 6 23:07:57.741: INFO: Pod "pod-projected-configmaps-409ad0ce-13be-4319-88ec-8c125702fbfd" satisfied condition "success or failure" May 6 23:07:57.743: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-409ad0ce-13be-4319-88ec-8c125702fbfd container projected-configmap-volume-test: STEP: delete the pod May 6 23:07:58.075: INFO: Waiting for pod pod-projected-configmaps-409ad0ce-13be-4319-88ec-8c125702fbfd to disappear May 6 23:07:58.128: INFO: Pod pod-projected-configmaps-409ad0ce-13be-4319-88ec-8c125702fbfd no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:07:58.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1506" for this suite. • [SLOW TEST:6.747 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":63,"skipped":972,"failed":0} SSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:07:58.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-7d674ea8-0f00-4827-ae4e-105ccd4b5f1b STEP: Creating a pod to test consume configMaps May 6 23:07:58.696: INFO: Waiting up to 5m0s for pod "pod-configmaps-22cedd83-6587-435e-8eb5-94594de42703" in namespace "configmap-8302" to be "success or failure" May 6 23:07:58.719: INFO: Pod "pod-configmaps-22cedd83-6587-435e-8eb5-94594de42703": Phase="Pending", Reason="", readiness=false. Elapsed: 23.070895ms May 6 23:08:00.773: INFO: Pod "pod-configmaps-22cedd83-6587-435e-8eb5-94594de42703": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076671614s May 6 23:08:02.832: INFO: Pod "pod-configmaps-22cedd83-6587-435e-8eb5-94594de42703": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.135748208s STEP: Saw pod success May 6 23:08:02.832: INFO: Pod "pod-configmaps-22cedd83-6587-435e-8eb5-94594de42703" satisfied condition "success or failure" May 6 23:08:02.835: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-22cedd83-6587-435e-8eb5-94594de42703 container configmap-volume-test: STEP: delete the pod May 6 23:08:03.109: INFO: Waiting for pod pod-configmaps-22cedd83-6587-435e-8eb5-94594de42703 to disappear May 6 23:08:03.119: INFO: Pod pod-configmaps-22cedd83-6587-435e-8eb5-94594de42703 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:08:03.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8302" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":64,"skipped":980,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:08:03.127: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 6 23:08:03.248: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9168817a-5123-44e6-8e5e-7047cf14a672" in namespace "projected-9835" to be "success or failure" May 6 23:08:03.300: INFO: Pod "downwardapi-volume-9168817a-5123-44e6-8e5e-7047cf14a672": Phase="Pending", Reason="", readiness=false. Elapsed: 52.013739ms May 6 23:08:05.376: INFO: Pod "downwardapi-volume-9168817a-5123-44e6-8e5e-7047cf14a672": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12859586s May 6 23:08:07.389: INFO: Pod "downwardapi-volume-9168817a-5123-44e6-8e5e-7047cf14a672": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.141593021s STEP: Saw pod success May 6 23:08:07.389: INFO: Pod "downwardapi-volume-9168817a-5123-44e6-8e5e-7047cf14a672" satisfied condition "success or failure" May 6 23:08:07.392: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-9168817a-5123-44e6-8e5e-7047cf14a672 container client-container: STEP: delete the pod May 6 23:08:07.540: INFO: Waiting for pod downwardapi-volume-9168817a-5123-44e6-8e5e-7047cf14a672 to disappear May 6 23:08:07.551: INFO: Pod downwardapi-volume-9168817a-5123-44e6-8e5e-7047cf14a672 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:08:07.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9835" for this suite. 
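------------------------------
The ConfigMap defaultMode spec a few entries back sets explicit permission bits on every key projected into the volume. A hedged sketch of such a pod; 0400 is one possible mode, and the image, command, and names are stand-ins:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0400) // read-only for the owner; a test would assert these bits
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-defaultmode"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
						// DefaultMode applies to every key that lacks its own Mode.
						DefaultMode: &mode,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "docker.io/library/busybox:1.29", // stand-in image
				Command: []string{"sh", "-c", "stat -c %a /etc/configmap-volume/*"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "configmap-volume", MountPath: "/etc/configmap-volume",
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------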
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":65,"skipped":1002,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:08:07.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-4938 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-4938 STEP: Deleting pre-stop pod May 6 23:08:21.819: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:08:21.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-4938" for this suite. • [SLOW TEST:14.285 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":66,"skipped":1021,"failed":0} [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:08:21.843: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
May 6 23:08:22.270: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 23:08:22.274: INFO: Number of nodes with available pods: 0 May 6 23:08:22.274: INFO: Node jerma-worker is running more than one daemon pod May 6 23:08:23.326: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 23:08:23.329: INFO: Number of nodes with available pods: 0 May 6 23:08:23.329: INFO: Node jerma-worker is running more than one daemon pod May 6 23:08:24.279: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 23:08:24.282: INFO: Number of nodes with available pods: 0 May 6 23:08:24.282: INFO: Node jerma-worker is running more than one daemon pod May 6 23:08:25.283: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 23:08:25.301: INFO: Number of nodes with available pods: 0 May 6 23:08:25.301: INFO: Node jerma-worker is running more than one daemon pod May 6 23:08:26.279: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 23:08:26.282: INFO: Number of nodes with available pods: 1 May 6 23:08:26.282: INFO: Node jerma-worker2 is running more than one daemon pod May 6 23:08:27.288: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 23:08:27.291: INFO: Number of nodes with available pods: 2 May 6 23:08:27.291: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. May 6 23:08:27.319: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 23:08:27.325: INFO: Number of nodes with available pods: 2 May 6 23:08:27.325: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3247, will wait for the garbage collector to delete the pods May 6 23:08:28.448: INFO: Deleting DaemonSet.extensions daemon-set took: 6.197784ms May 6 23:08:28.748: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.236063ms May 6 23:08:32.452: INFO: Number of nodes with available pods: 0 May 6 23:08:32.452: INFO: Number of running nodes: 0, number of available pods: 0 May 6 23:08:32.458: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3247/daemonsets","resourceVersion":"14022714"},"items":null} May 6 23:08:32.461: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3247/pods","resourceVersion":"14022714"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:08:32.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3247" for this suite. • [SLOW TEST:10.635 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":67,"skipped":1021,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:08:32.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1275 STEP: creating the pod May 6 23:08:32.530: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-455' May 6 23:08:32.902: INFO: stderr: "" May 6 23:08:32.902: INFO: stdout: "pod/pause created\n" May 6 23:08:32.902: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 6 23:08:32.902: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-455" to be "running and ready" May 6 23:08:32.907: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 5.266357ms May 6 23:08:34.914: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011936785s May 6 23:08:36.918: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.016510585s May 6 23:08:36.918: INFO: Pod "pause" satisfied condition "running and ready" May 6 23:08:36.918: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod May 6 23:08:36.918: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-455' May 6 23:08:37.022: INFO: stderr: "" May 6 23:08:37.022: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 6 23:08:37.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-455' May 6 23:08:37.118: INFO: stderr: "" May 6 23:08:37.118: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod May 6 23:08:37.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-455' May 6 23:08:37.219: INFO: stderr: "" May 6 23:08:37.219: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 6 23:08:37.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-455' May 6 23:08:37.329: INFO: stderr: "" May 6 23:08:37.329: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1282 STEP: using delete to clean up resources May 6 23:08:37.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-455' May 6 23:08:37.501: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 6 23:08:37.501: INFO: stdout: "pod \"pause\" force deleted\n" May 6 23:08:37.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-455' May 6 23:08:37.657: INFO: stderr: "No resources found in kubectl-455 namespace.\n" May 6 23:08:37.657: INFO: stdout: "" May 6 23:08:37.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-455 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 6 23:08:37.812: INFO: stderr: "" May 6 23:08:37.812: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:08:37.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-455" for this suite. 
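For reference, the client-go equivalent of the `kubectl label` invocations above is a strategic-merge patch on the pod's metadata. A minimal sketch, assuming client-go v0.17 (matching the v1.17 suite, before API calls took a context argument); the namespace and pod name are taken from the run, everything else is illustrative:

package main

import (
    "fmt"

    "k8s.io/apimachinery/pkg/types"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    // Add the label with a strategic-merge patch. Setting the value to null
    // instead removes it, which is what `kubectl label pods pause testing-label-`
    // does under the hood.
    patch := []byte(`{"metadata":{"labels":{"testing-label":"testing-label-value"}}}`)
    pod, err := cs.CoreV1().Pods("kubectl-455").Patch("pause", types.StrategicMergePatchType, patch)
    if err != nil {
        panic(err)
    }
    fmt.Println(pod.Labels)
}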
• [SLOW TEST:5.475 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1272 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":68,"skipped":1028,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:08:37.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 6 23:08:38.237: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-43bec6f9-7cba-472a-9834-3120fa4f6ece" in namespace "security-context-test-8494" to be "success or failure" May 6 23:08:38.365: INFO: Pod "alpine-nnp-false-43bec6f9-7cba-472a-9834-3120fa4f6ece": Phase="Pending", Reason="", readiness=false. Elapsed: 127.594394ms May 6 23:08:40.369: INFO: Pod "alpine-nnp-false-43bec6f9-7cba-472a-9834-3120fa4f6ece": Phase="Pending", Reason="", readiness=false. Elapsed: 2.131293161s May 6 23:08:42.373: INFO: Pod "alpine-nnp-false-43bec6f9-7cba-472a-9834-3120fa4f6ece": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.136078073s May 6 23:08:42.373: INFO: Pod "alpine-nnp-false-43bec6f9-7cba-472a-9834-3120fa4f6ece" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:08:42.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-8494" for this suite. 
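For reference, a minimal sketch of the kind of pod this security-context test creates, assuming client-go v0.17; the image, command, and names are illustrative, not taken from the run:

package main

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    no := false
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "nnp-false-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "demo",
                Image:   "alpine:3.11",
                Command: []string{"sh", "-c", "grep NoNewPrivs /proc/self/status"},
                SecurityContext: &corev1.SecurityContext{
                    // The knob this test exercises: with this set to false, the
                    // container runs with no_new_privs, so setuid binaries cannot
                    // gain additional privileges.
                    AllowPrivilegeEscalation: &no,
                },
            }},
        },
    }
    if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
        panic(err)
    }
}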
•{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":69,"skipped":1042,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:08:42.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating api versions May 6 23:08:42.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' May 6 23:08:43.009: INFO: stderr: "" May 6 23:08:43.010: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:08:43.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1524" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":278,"completed":70,"skipped":1045,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:08:43.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service nodeport-test with type=NodePort in namespace services-9667 STEP: creating replication controller nodeport-test in namespace services-9667 I0506 23:08:43.622575 6 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-9667, replica count: 2 I0506 23:08:46.673083 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 23:08:49.673523 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 6 23:08:49.673: INFO: Creating new exec pod May 6 23:08:54.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9667 execpod5kc95 -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' May 6 23:08:54.925: INFO: stderr: "I0506 23:08:54.813820 1999 log.go:172] (0xc000105290) (0xc0006bbb80) Create stream\nI0506 23:08:54.813860 1999 log.go:172] (0xc000105290) (0xc0006bbb80) Stream added, broadcasting: 1\nI0506 23:08:54.815621 1999 log.go:172] (0xc000105290) Reply frame received for 1\nI0506 23:08:54.815685 1999 log.go:172] (0xc000105290) (0xc000562000) Create stream\nI0506 23:08:54.815709 1999 log.go:172] (0xc000105290) (0xc000562000) Stream added, broadcasting: 3\nI0506 23:08:54.816746 1999 log.go:172] (0xc000105290) Reply frame received for 3\nI0506 23:08:54.816778 1999 log.go:172] (0xc000105290) (0xc0006bbd60) Create stream\nI0506 23:08:54.816788 1999 log.go:172] (0xc000105290) (0xc0006bbd60) Stream added, broadcasting: 5\nI0506 23:08:54.817808 1999 log.go:172] (0xc000105290) Reply frame received for 5\nI0506 23:08:54.918870 1999 log.go:172] (0xc000105290) Data frame received for 5\nI0506 23:08:54.918892 1999 log.go:172] (0xc0006bbd60) (5) Data frame handling\nI0506 23:08:54.918905 1999 log.go:172] (0xc0006bbd60) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0506 23:08:54.919321 1999 log.go:172] (0xc000105290) Data frame received for 5\nI0506 23:08:54.919342 1999 log.go:172] (0xc0006bbd60) (5) Data frame handling\nI0506 23:08:54.919357 1999 log.go:172] (0xc0006bbd60) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0506 23:08:54.919399 1999 log.go:172] (0xc000105290) Data frame received for 5\nI0506 23:08:54.919454 1999 log.go:172] (0xc0006bbd60) (5) Data frame handling\nI0506 23:08:54.919721 
1999 log.go:172] (0xc000105290) Data frame received for 3\nI0506 23:08:54.919733 1999 log.go:172] (0xc000562000) (3) Data frame handling\nI0506 23:08:54.920914 1999 log.go:172] (0xc000105290) Data frame received for 1\nI0506 23:08:54.920930 1999 log.go:172] (0xc0006bbb80) (1) Data frame handling\nI0506 23:08:54.920944 1999 log.go:172] (0xc0006bbb80) (1) Data frame sent\nI0506 23:08:54.920956 1999 log.go:172] (0xc000105290) (0xc0006bbb80) Stream removed, broadcasting: 1\nI0506 23:08:54.920971 1999 log.go:172] (0xc000105290) Go away received\nI0506 23:08:54.921434 1999 log.go:172] (0xc000105290) (0xc0006bbb80) Stream removed, broadcasting: 1\nI0506 23:08:54.921474 1999 log.go:172] (0xc000105290) (0xc000562000) Stream removed, broadcasting: 3\nI0506 23:08:54.921492 1999 log.go:172] (0xc000105290) (0xc0006bbd60) Stream removed, broadcasting: 5\n" May 6 23:08:54.925: INFO: stdout: "" May 6 23:08:54.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9667 execpod5kc95 -- /bin/sh -x -c nc -zv -t -w 2 10.101.58.56 80' May 6 23:08:55.110: INFO: stderr: "I0506 23:08:55.048212 2020 log.go:172] (0xc0001f0d10) (0xc000695c20) Create stream\nI0506 23:08:55.048272 2020 log.go:172] (0xc0001f0d10) (0xc000695c20) Stream added, broadcasting: 1\nI0506 23:08:55.050086 2020 log.go:172] (0xc0001f0d10) Reply frame received for 1\nI0506 23:08:55.050111 2020 log.go:172] (0xc0001f0d10) (0xc0005e0640) Create stream\nI0506 23:08:55.050118 2020 log.go:172] (0xc0001f0d10) (0xc0005e0640) Stream added, broadcasting: 3\nI0506 23:08:55.050663 2020 log.go:172] (0xc0001f0d10) Reply frame received for 3\nI0506 23:08:55.050685 2020 log.go:172] (0xc0001f0d10) (0xc000695cc0) Create stream\nI0506 23:08:55.050699 2020 log.go:172] (0xc0001f0d10) (0xc000695cc0) Stream added, broadcasting: 5\nI0506 23:08:55.051382 2020 log.go:172] (0xc0001f0d10) Reply frame received for 5\nI0506 23:08:55.104087 2020 log.go:172] (0xc0001f0d10) Data frame received for 5\nI0506 23:08:55.104126 2020 log.go:172] (0xc000695cc0) (5) Data frame handling\nI0506 23:08:55.104143 2020 log.go:172] (0xc000695cc0) (5) Data frame sent\nI0506 23:08:55.104153 2020 log.go:172] (0xc0001f0d10) Data frame received for 5\nI0506 23:08:55.104166 2020 log.go:172] (0xc000695cc0) (5) Data frame handling\nI0506 23:08:55.104188 2020 log.go:172] (0xc0001f0d10) Data frame received for 3\nI0506 23:08:55.104197 2020 log.go:172] (0xc0005e0640) (3) Data frame handling\n+ nc -zv -t -w 2 10.101.58.56 80\nConnection to 10.101.58.56 80 port [tcp/http] succeeded!\nI0506 23:08:55.105686 2020 log.go:172] (0xc0001f0d10) Data frame received for 1\nI0506 23:08:55.105713 2020 log.go:172] (0xc000695c20) (1) Data frame handling\nI0506 23:08:55.105724 2020 log.go:172] (0xc000695c20) (1) Data frame sent\nI0506 23:08:55.105739 2020 log.go:172] (0xc0001f0d10) (0xc000695c20) Stream removed, broadcasting: 1\nI0506 23:08:55.105756 2020 log.go:172] (0xc0001f0d10) Go away received\nI0506 23:08:55.106101 2020 log.go:172] (0xc0001f0d10) (0xc000695c20) Stream removed, broadcasting: 1\nI0506 23:08:55.106122 2020 log.go:172] (0xc0001f0d10) (0xc0005e0640) Stream removed, broadcasting: 3\nI0506 23:08:55.106146 2020 log.go:172] (0xc0001f0d10) (0xc000695cc0) Stream removed, broadcasting: 5\n" May 6 23:08:55.111: INFO: stdout: "" May 6 23:08:55.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9667 execpod5kc95 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 30510' May 6 23:08:55.294: INFO: stderr: "I0506 23:08:55.228480 
2038 log.go:172] (0xc0001042c0) (0xc0006fd4a0) Create stream\nI0506 23:08:55.228535 2038 log.go:172] (0xc0001042c0) (0xc0006fd4a0) Stream added, broadcasting: 1\nI0506 23:08:55.230346 2038 log.go:172] (0xc0001042c0) Reply frame received for 1\nI0506 23:08:55.230385 2038 log.go:172] (0xc0001042c0) (0xc0009be000) Create stream\nI0506 23:08:55.230402 2038 log.go:172] (0xc0001042c0) (0xc0009be000) Stream added, broadcasting: 3\nI0506 23:08:55.231076 2038 log.go:172] (0xc0001042c0) Reply frame received for 3\nI0506 23:08:55.231105 2038 log.go:172] (0xc0001042c0) (0xc00063da40) Create stream\nI0506 23:08:55.231112 2038 log.go:172] (0xc0001042c0) (0xc00063da40) Stream added, broadcasting: 5\nI0506 23:08:55.231855 2038 log.go:172] (0xc0001042c0) Reply frame received for 5\nI0506 23:08:55.286940 2038 log.go:172] (0xc0001042c0) Data frame received for 5\nI0506 23:08:55.286977 2038 log.go:172] (0xc00063da40) (5) Data frame handling\nI0506 23:08:55.286997 2038 log.go:172] (0xc00063da40) (5) Data frame sent\nI0506 23:08:55.287018 2038 log.go:172] (0xc0001042c0) Data frame received for 5\nI0506 23:08:55.287028 2038 log.go:172] (0xc00063da40) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.10 30510\nConnection to 172.17.0.10 30510 port [tcp/30510] succeeded!\nI0506 23:08:55.287057 2038 log.go:172] (0xc00063da40) (5) Data frame sent\nI0506 23:08:55.287370 2038 log.go:172] (0xc0001042c0) Data frame received for 3\nI0506 23:08:55.287393 2038 log.go:172] (0xc0009be000) (3) Data frame handling\nI0506 23:08:55.287423 2038 log.go:172] (0xc0001042c0) Data frame received for 5\nI0506 23:08:55.287435 2038 log.go:172] (0xc00063da40) (5) Data frame handling\nI0506 23:08:55.289097 2038 log.go:172] (0xc0001042c0) Data frame received for 1\nI0506 23:08:55.289310 2038 log.go:172] (0xc0006fd4a0) (1) Data frame handling\nI0506 23:08:55.289345 2038 log.go:172] (0xc0006fd4a0) (1) Data frame sent\nI0506 23:08:55.289550 2038 log.go:172] (0xc0001042c0) (0xc0006fd4a0) Stream removed, broadcasting: 1\nI0506 23:08:55.289626 2038 log.go:172] (0xc0001042c0) Go away received\nI0506 23:08:55.289969 2038 log.go:172] (0xc0001042c0) (0xc0006fd4a0) Stream removed, broadcasting: 1\nI0506 23:08:55.289988 2038 log.go:172] (0xc0001042c0) (0xc0009be000) Stream removed, broadcasting: 3\nI0506 23:08:55.289999 2038 log.go:172] (0xc0001042c0) (0xc00063da40) Stream removed, broadcasting: 5\n" May 6 23:08:55.294: INFO: stdout: "" May 6 23:08:55.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9667 execpod5kc95 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 30510' May 6 23:08:55.852: INFO: stderr: "I0506 23:08:55.788809 2057 log.go:172] (0xc000918000) (0xc000926000) Create stream\nI0506 23:08:55.788870 2057 log.go:172] (0xc000918000) (0xc000926000) Stream added, broadcasting: 1\nI0506 23:08:55.791003 2057 log.go:172] (0xc000918000) Reply frame received for 1\nI0506 23:08:55.791033 2057 log.go:172] (0xc000918000) (0xc000b58000) Create stream\nI0506 23:08:55.791042 2057 log.go:172] (0xc000918000) (0xc000b58000) Stream added, broadcasting: 3\nI0506 23:08:55.791644 2057 log.go:172] (0xc000918000) Reply frame received for 3\nI0506 23:08:55.791669 2057 log.go:172] (0xc000918000) (0xc0009260a0) Create stream\nI0506 23:08:55.791676 2057 log.go:172] (0xc000918000) (0xc0009260a0) Stream added, broadcasting: 5\nI0506 23:08:55.792328 2057 log.go:172] (0xc000918000) Reply frame received for 5\nI0506 23:08:55.846162 2057 log.go:172] (0xc000918000) Data frame received for 5\nI0506 23:08:55.846220 2057 log.go:172] 
(0xc0009260a0) (5) Data frame handling\nI0506 23:08:55.846262 2057 log.go:172] (0xc0009260a0) (5) Data frame sent\nI0506 23:08:55.846296 2057 log.go:172] (0xc000918000) Data frame received for 5\nI0506 23:08:55.846322 2057 log.go:172] (0xc0009260a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.8 30510\nConnection to 172.17.0.8 30510 port [tcp/30510] succeeded!\nI0506 23:08:55.846384 2057 log.go:172] (0xc0009260a0) (5) Data frame sent\nI0506 23:08:55.846534 2057 log.go:172] (0xc000918000) Data frame received for 5\nI0506 23:08:55.846573 2057 log.go:172] (0xc0009260a0) (5) Data frame handling\nI0506 23:08:55.846965 2057 log.go:172] (0xc000918000) Data frame received for 3\nI0506 23:08:55.846986 2057 log.go:172] (0xc000b58000) (3) Data frame handling\nI0506 23:08:55.848735 2057 log.go:172] (0xc000918000) Data frame received for 1\nI0506 23:08:55.848751 2057 log.go:172] (0xc000926000) (1) Data frame handling\nI0506 23:08:55.848759 2057 log.go:172] (0xc000926000) (1) Data frame sent\nI0506 23:08:55.848768 2057 log.go:172] (0xc000918000) (0xc000926000) Stream removed, broadcasting: 1\nI0506 23:08:55.848806 2057 log.go:172] (0xc000918000) Go away received\nI0506 23:08:55.849028 2057 log.go:172] (0xc000918000) (0xc000926000) Stream removed, broadcasting: 1\nI0506 23:08:55.849041 2057 log.go:172] (0xc000918000) (0xc000b58000) Stream removed, broadcasting: 3\nI0506 23:08:55.849047 2057 log.go:172] (0xc000918000) (0xc0009260a0) Stream removed, broadcasting: 5\n" May 6 23:08:55.852: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:08:55.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9667" for this suite. 
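For reference, a minimal client-go v0.17 sketch of a NodePort Service like the one this test creates; names are illustrative:

package main

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    svc := &corev1.Service{
        ObjectMeta: metav1.ObjectMeta{Name: "nodeport-demo"},
        Spec: corev1.ServiceSpec{
            // NodePort exposes the service on a port from the node-port range
            // (30000-32767 by default; 30510 in the run above) on every node.
            Type:     corev1.ServiceTypeNodePort,
            Selector: map[string]string{"name": "nodeport-demo"},
            Ports: []corev1.ServicePort{{
                Port:       80,
                TargetPort: intstr.FromInt(80),
                // NodePort left at 0 so the apiserver allocates a free one.
            }},
        },
    }
    created, err := cs.CoreV1().Services("default").Create(svc)
    if err != nil {
        panic(err)
    }
    // The test then verifies reachability on <clusterIP>:80 and on
    // <nodeIP>:<allocated NodePort>, e.g. with `nc -zv -t -w 2` from an exec pod.
    _ = created.Spec.Ports[0].NodePort
}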
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:13.041 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":71,"skipped":1073,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:08:56.084: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation May 6 23:08:56.152: INFO: >>> kubeConfig: /root/.kube/config May 6 23:08:59.167: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:09:11.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7519" for this suite. 
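For reference, a CRD's structural openAPIV3Schema is what the apiserver publishes into its OpenAPI document, which is the behavior checked above: two Kinds in the same group/version each get their own definition. A minimal sketch using the apiextensions v1 client; the group, kind, and schema fields are illustrative:

package main

import (
    apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
    apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := apiextclient.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    crd := &apiextv1.CustomResourceDefinition{
        ObjectMeta: metav1.ObjectMeta{Name: "foos.demo.example.com"},
        Spec: apiextv1.CustomResourceDefinitionSpec{
            Group: "demo.example.com",
            Scope: apiextv1.NamespaceScoped,
            Names: apiextv1.CustomResourceDefinitionNames{
                Plural: "foos", Singular: "foo", Kind: "Foo", ListKind: "FooList",
            },
            Versions: []apiextv1.CustomResourceDefinitionVersion{{
                Name: "v1", Served: true, Storage: true,
                // This structural schema is what shows up under /openapi/v2.
                Schema: &apiextv1.CustomResourceValidation{
                    OpenAPIV3Schema: &apiextv1.JSONSchemaProps{
                        Type: "object",
                        Properties: map[string]apiextv1.JSONSchemaProps{
                            "spec": {Type: "object", Properties: map[string]apiextv1.JSONSchemaProps{
                                "bars": {Type: "integer"},
                            }},
                        },
                    },
                },
            }},
        },
    }
    if _, err := cs.ApiextensionsV1().CustomResourceDefinitions().Create(crd); err != nil {
        panic(err)
    }
}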
• [SLOW TEST:15.233 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":72,"skipped":1119,"failed":0} SSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:09:11.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:09:15.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8364" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":73,"skipped":1125,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:09:15.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-1607500a-ad08-47ad-a43b-4b143248192f STEP: Creating a pod to test consume secrets May 6 23:09:15.737: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b76cc968-9ae4-4fcd-bb1e-66d6951ed92a" in namespace "projected-2016" to be "success or failure" May 6 23:09:15.752: INFO: Pod "pod-projected-secrets-b76cc968-9ae4-4fcd-bb1e-66d6951ed92a": Phase="Pending", Reason="", readiness=false. Elapsed: 14.386729ms May 6 23:09:17.756: INFO: Pod "pod-projected-secrets-b76cc968-9ae4-4fcd-bb1e-66d6951ed92a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.018577776s May 6 23:09:19.916: INFO: Pod "pod-projected-secrets-b76cc968-9ae4-4fcd-bb1e-66d6951ed92a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.178483785s STEP: Saw pod success May 6 23:09:19.916: INFO: Pod "pod-projected-secrets-b76cc968-9ae4-4fcd-bb1e-66d6951ed92a" satisfied condition "success or failure" May 6 23:09:19.921: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-b76cc968-9ae4-4fcd-bb1e-66d6951ed92a container projected-secret-volume-test: STEP: delete the pod May 6 23:09:20.062: INFO: Waiting for pod pod-projected-secrets-b76cc968-9ae4-4fcd-bb1e-66d6951ed92a to disappear May 6 23:09:20.070: INFO: Pod pod-projected-secrets-b76cc968-9ae4-4fcd-bb1e-66d6951ed92a no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:09:20.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2016" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":74,"skipped":1152,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:09:20.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
May 6 23:09:20.524: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 23:09:20.599: INFO: Number of nodes with available pods: 0 May 6 23:09:20.599: INFO: Node jerma-worker is running more than one daemon pod May 6 23:09:21.654: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 23:09:21.659: INFO: Number of nodes with available pods: 0 May 6 23:09:21.659: INFO: Node jerma-worker is running more than one daemon pod May 6 23:09:22.977: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 23:09:22.980: INFO: Number of nodes with available pods: 0 May 6 23:09:22.980: INFO: Node jerma-worker is running more than one daemon pod May 6 23:09:23.618: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 23:09:23.623: INFO: Number of nodes with available pods: 0 May 6 23:09:23.623: INFO: Node jerma-worker is running more than one daemon pod May 6 23:09:24.686: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 23:09:24.691: INFO: Number of nodes with available pods: 0 May 6 23:09:24.691: INFO: Node jerma-worker is running more than one daemon pod May 6 23:09:25.751: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 23:09:25.811: INFO: Number of nodes with available pods: 1 May 6 23:09:25.811: INFO: Node jerma-worker2 is running more than one daemon pod May 6 23:09:26.608: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 23:09:26.612: INFO: Number of nodes with available pods: 1 May 6 23:09:26.612: INFO: Node jerma-worker2 is running more than one daemon pod May 6 23:09:27.605: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 23:09:27.609: INFO: Number of nodes with available pods: 2 May 6 23:09:27.609: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
May 6 23:09:27.679: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 23:09:27.682: INFO: Number of nodes with available pods: 1 May 6 23:09:27.682: INFO: Node jerma-worker is running more than one daemon pod May 6 23:09:28.739: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 23:09:28.742: INFO: Number of nodes with available pods: 1 May 6 23:09:28.742: INFO: Node jerma-worker is running more than one daemon pod May 6 23:09:29.687: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 23:09:29.713: INFO: Number of nodes with available pods: 1 May 6 23:09:29.713: INFO: Node jerma-worker is running more than one daemon pod May 6 23:09:30.687: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 23:09:30.691: INFO: Number of nodes with available pods: 1 May 6 23:09:30.691: INFO: Node jerma-worker is running more than one daemon pod May 6 23:09:31.688: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 23:09:31.691: INFO: Number of nodes with available pods: 1 May 6 23:09:31.692: INFO: Node jerma-worker is running more than one daemon pod May 6 23:09:32.687: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 23:09:32.690: INFO: Number of nodes with available pods: 1 May 6 23:09:32.690: INFO: Node jerma-worker is running more than one daemon pod May 6 23:09:33.688: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 23:09:33.691: INFO: Number of nodes with available pods: 1 May 6 23:09:33.691: INFO: Node jerma-worker is running more than one daemon pod May 6 23:09:34.688: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 23:09:34.691: INFO: Number of nodes with available pods: 1 May 6 23:09:34.691: INFO: Node jerma-worker is running more than one daemon pod May 6 23:09:35.686: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 23:09:35.689: INFO: Number of nodes with available pods: 1 May 6 23:09:35.689: INFO: Node jerma-worker is running more than one daemon pod May 6 23:09:36.687: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 23:09:36.690: INFO: Number of nodes with available pods: 1 May 6 23:09:36.690: INFO: Node jerma-worker is running more than one daemon pod May 6 23:09:38.068: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip 
checking this node May 6 23:09:38.071: INFO: Number of nodes with available pods: 1 May 6 23:09:38.071: INFO: Node jerma-worker is running more than one daemon pod May 6 23:09:38.695: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 23:09:38.699: INFO: Number of nodes with available pods: 1 May 6 23:09:38.699: INFO: Node jerma-worker is running more than one daemon pod May 6 23:09:39.687: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 23:09:39.691: INFO: Number of nodes with available pods: 1 May 6 23:09:39.691: INFO: Node jerma-worker is running more than one daemon pod May 6 23:09:40.954: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 23:09:41.194: INFO: Number of nodes with available pods: 1 May 6 23:09:41.194: INFO: Node jerma-worker is running more than one daemon pod May 6 23:09:41.701: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 23:09:41.924: INFO: Number of nodes with available pods: 1 May 6 23:09:41.924: INFO: Node jerma-worker is running more than one daemon pod May 6 23:09:42.703: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 23:09:42.707: INFO: Number of nodes with available pods: 2 May 6 23:09:42.707: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8707, will wait for the garbage collector to delete the pods May 6 23:09:42.774: INFO: Deleting DaemonSet.extensions daemon-set took: 10.142649ms May 6 23:09:43.174: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.59555ms May 6 23:09:47.678: INFO: Number of nodes with available pods: 0 May 6 23:09:47.678: INFO: Number of running nodes: 0, number of available pods: 0 May 6 23:09:47.682: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8707/daemonsets","resourceVersion":"14023225"},"items":null} May 6 23:09:47.684: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8707/pods","resourceVersion":"14023225"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:09:47.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8707" for this suite. 
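For reference, a minimal client-go v0.17 sketch of a simple DaemonSet like the ones these [sig-apps] tests create; names and image are illustrative. Note the pod template carries no toleration for the master taint, which is why the run above repeatedly skips jerma-control-plane:

package main

import (
    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    labels := map[string]string{"daemonset-name": "daemon-set-demo"}
    ds := &appsv1.DaemonSet{
        ObjectMeta: metav1.ObjectMeta{Name: "daemon-set-demo"},
        Spec: appsv1.DaemonSetSpec{
            Selector: &metav1.LabelSelector{MatchLabels: labels},
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    // No toleration for node-role.kubernetes.io/master:NoSchedule,
                    // so tainted control-plane nodes never get a daemon pod and the
                    // controller only targets the two workers.
                    Containers: []corev1.Container{{
                        Name:  "app",
                        Image: "k8s.gcr.io/pause:3.1",
                    }},
                },
            },
        },
    }
    if _, err := cs.AppsV1().DaemonSets("default").Create(ds); err != nil {
        panic(err)
    }
}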
• [SLOW TEST:27.621 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":75,"skipped":1168,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:09:47.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-a8e79d38-9f3c-46b9-8013-5878626af0ca STEP: Creating secret with name s-test-opt-upd-d56e4bde-f60c-45bf-b817-224fc74df7c8 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-a8e79d38-9f3c-46b9-8013-5878626af0ca STEP: Updating secret s-test-opt-upd-d56e4bde-f60c-45bf-b817-224fc74df7c8 STEP: Creating secret with name s-test-opt-create-595fc4a2-4f2a-4c9c-abc2-85f6910ac9fe STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:09:58.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2979" for this suite. 
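For reference, a minimal client-go v0.17 sketch of a pod mounting a projected secret volume with Optional set, the mechanism this test updates and watches; names are illustrative:

package main

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    optional := true
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "projected-secret-demo"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:         "watcher",
                Image:        "busybox:1.31",
                Command:      []string{"sh", "-c", "while true; do ls /etc/projected; sleep 5; done"},
                VolumeMounts: []corev1.VolumeMount{{Name: "projected", MountPath: "/etc/projected"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "projected",
                VolumeSource: corev1.VolumeSource{
                    Projected: &corev1.ProjectedVolumeSource{
                        Sources: []corev1.VolumeProjection{{
                            Secret: &corev1.SecretProjection{
                                LocalObjectReference: corev1.LocalObjectReference{Name: "s-test-opt-del"},
                                // Optional: the pod starts even if the secret is
                                // absent, and the kubelet adds or removes the files
                                // as the secret is created, updated, or deleted,
                                // which is the update the test waits to observe.
                                Optional: &optional,
                            },
                        }},
                    },
                },
            }},
        },
    }
    if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
        panic(err)
    }
}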
• [SLOW TEST:10.755 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":76,"skipped":1175,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:09:58.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 6 23:10:00.645: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 6 23:10:02.656: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724403400, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724403400, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724403400, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724403400, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 23:10:04.821: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724403400, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724403400, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724403400, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724403400, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 23:10:07.708: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 6 23:10:07.712: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4834-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:10:09.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5183" for this suite. STEP: Destroying namespace "webhook-5183-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.769 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":77,"skipped":1186,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:10:09.224: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:10:25.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8943" for this suite. • [SLOW TEST:16.672 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":78,"skipped":1202,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:10:25.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 6 23:10:26.053: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 6 23:10:26.121: INFO: Number of nodes with available pods: 0 May 6 23:10:26.121: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
May 6 23:10:26.166: INFO: Number of nodes with available pods: 0 May 6 23:10:26.166: INFO: Node jerma-worker is running more than one daemon pod May 6 23:10:27.175: INFO: Number of nodes with available pods: 0 May 6 23:10:27.175: INFO: Node jerma-worker is running more than one daemon pod May 6 23:10:28.528: INFO: Number of nodes with available pods: 0 May 6 23:10:28.528: INFO: Node jerma-worker is running more than one daemon pod May 6 23:10:29.492: INFO: Number of nodes with available pods: 0 May 6 23:10:29.493: INFO: Node jerma-worker is running more than one daemon pod May 6 23:10:30.171: INFO: Number of nodes with available pods: 0 May 6 23:10:30.171: INFO: Node jerma-worker is running more than one daemon pod May 6 23:10:31.481: INFO: Number of nodes with available pods: 0 May 6 23:10:31.481: INFO: Node jerma-worker is running more than one daemon pod May 6 23:10:32.337: INFO: Number of nodes with available pods: 0 May 6 23:10:32.337: INFO: Node jerma-worker is running more than one daemon pod May 6 23:10:33.344: INFO: Number of nodes with available pods: 1 May 6 23:10:33.344: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 6 23:10:34.020: INFO: Number of nodes with available pods: 1 May 6 23:10:34.020: INFO: Number of running nodes: 0, number of available pods: 1 May 6 23:10:35.030: INFO: Number of nodes with available pods: 0 May 6 23:10:35.030: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 6 23:10:35.281: INFO: Number of nodes with available pods: 0 May 6 23:10:35.281: INFO: Node jerma-worker is running more than one daemon pod May 6 23:10:36.285: INFO: Number of nodes with available pods: 0 May 6 23:10:36.285: INFO: Node jerma-worker is running more than one daemon pod May 6 23:10:37.284: INFO: Number of nodes with available pods: 0 May 6 23:10:37.284: INFO: Node jerma-worker is running more than one daemon pod May 6 23:10:38.415: INFO: Number of nodes with available pods: 0 May 6 23:10:38.415: INFO: Node jerma-worker is running more than one daemon pod May 6 23:10:39.285: INFO: Number of nodes with available pods: 0 May 6 23:10:39.285: INFO: Node jerma-worker is running more than one daemon pod May 6 23:10:40.319: INFO: Number of nodes with available pods: 0 May 6 23:10:40.319: INFO: Node jerma-worker is running more than one daemon pod May 6 23:10:41.284: INFO: Number of nodes with available pods: 0 May 6 23:10:41.284: INFO: Node jerma-worker is running more than one daemon pod May 6 23:10:42.285: INFO: Number of nodes with available pods: 1 May 6 23:10:42.285: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1809, will wait for the garbage collector to delete the pods May 6 23:10:42.380: INFO: Deleting DaemonSet.extensions daemon-set took: 36.142828ms May 6 23:10:42.680: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.242332ms May 6 23:10:45.784: INFO: Number of nodes with available pods: 0 May 6 23:10:45.784: INFO: Number of running nodes: 0, number of available pods: 0 May 6 23:10:45.787: INFO: daemonset: 
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1809/daemonsets","resourceVersion":"14023631"},"items":null} May 6 23:10:45.790: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1809/pods","resourceVersion":"14023631"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:10:45.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1809" for this suite. • [SLOW TEST:19.930 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":79,"skipped":1227,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:10:45.828: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 May 6 23:10:45.904: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the sample API server. 
May 6 23:10:46.443: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set May 6 23:10:49.362: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724403446, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724403446, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724403446, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724403446, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 23:10:51.365: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724403446, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724403446, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724403446, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724403446, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 23:10:53.894: INFO: Waited 522.434212ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:10:56.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-628" for this suite. 
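The two status dumps above are the test polling the sample-apiserver deployment while it is still Available=False (reason MinimumReplicasUnavailable) and Progressing=True (reason ReplicaSetUpdated); it proceeds once ReadyReplicas reaches the requested count. The same conditions can be inspected by hand; a sketch using the deployment name from this run:

kubectl -n aggregator-628 get deployment sample-apiserver-deployment \
  -o jsonpath='{range .status.conditions[*]}{.type}={.status} ({.reason}){"\n"}{end}'
# or block until it becomes available, which is what the test effectively does:
kubectl -n aggregator-628 rollout status deployment/sample-apiserver-deployment --timeout=2m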
• [SLOW TEST:10.990 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":80,"skipped":1243,"failed":0} S ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:10:56.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-2320/configmap-test-994c44c3-2ca9-4ae8-a0e5-e60d68fedc33 STEP: Creating a pod to test consume configMaps May 6 23:10:57.229: INFO: Waiting up to 5m0s for pod "pod-configmaps-4169e18d-ec82-4eb5-a91a-ddd8760b6839" in namespace "configmap-2320" to be "success or failure" May 6 23:10:57.331: INFO: Pod "pod-configmaps-4169e18d-ec82-4eb5-a91a-ddd8760b6839": Phase="Pending", Reason="", readiness=false. Elapsed: 101.87662ms May 6 23:10:59.335: INFO: Pod "pod-configmaps-4169e18d-ec82-4eb5-a91a-ddd8760b6839": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105426845s May 6 23:11:01.339: INFO: Pod "pod-configmaps-4169e18d-ec82-4eb5-a91a-ddd8760b6839": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.109991564s STEP: Saw pod success May 6 23:11:01.339: INFO: Pod "pod-configmaps-4169e18d-ec82-4eb5-a91a-ddd8760b6839" satisfied condition "success or failure" May 6 23:11:01.342: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-4169e18d-ec82-4eb5-a91a-ddd8760b6839 container env-test: STEP: delete the pod May 6 23:11:01.400: INFO: Waiting for pod pod-configmaps-4169e18d-ec82-4eb5-a91a-ddd8760b6839 to disappear May 6 23:11:01.410: INFO: Pod pod-configmaps-4169e18d-ec82-4eb5-a91a-ddd8760b6839 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:11:01.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2320" for this suite. 
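The ConfigMap test above injects a key into the container purely through the environment, with no volume involved. A minimal sketch of the same pattern (all names illustrative):

# configmap-env.yaml -- apply with: kubectl apply -f configmap-env.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: env-test
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo CONFIG_DATA_1=$CONFIG_DATA_1"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test
          key: data-1

The test then reads the pod's log and checks for the expected value, the same "success or failure" pattern used throughout this suite.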
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":81,"skipped":1244,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:11:01.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes May 6 23:11:05.625: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice May 6 23:11:20.831: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:11:20.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1209" for this suite. 
• [SLOW TEST:19.679 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":82,"skipped":1259,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:11:21.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 6 23:11:25.749: INFO: Waiting up to 5m0s for pod "client-envvars-41ccc38e-c1ab-4a7d-b24e-2457dd0da12b" in namespace "pods-1815" to be "success or failure" May 6 23:11:25.755: INFO: Pod "client-envvars-41ccc38e-c1ab-4a7d-b24e-2457dd0da12b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.83882ms May 6 23:11:27.759: INFO: Pod "client-envvars-41ccc38e-c1ab-4a7d-b24e-2457dd0da12b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009950547s May 6 23:11:29.763: INFO: Pod "client-envvars-41ccc38e-c1ab-4a7d-b24e-2457dd0da12b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013364727s STEP: Saw pod success May 6 23:11:29.763: INFO: Pod "client-envvars-41ccc38e-c1ab-4a7d-b24e-2457dd0da12b" satisfied condition "success or failure" May 6 23:11:29.766: INFO: Trying to get logs from node jerma-worker pod client-envvars-41ccc38e-c1ab-4a7d-b24e-2457dd0da12b container env3cont: STEP: delete the pod May 6 23:11:29.796: INFO: Waiting for pod client-envvars-41ccc38e-c1ab-4a7d-b24e-2457dd0da12b to disappear May 6 23:11:29.800: INFO: Pod client-envvars-41ccc38e-c1ab-4a7d-b24e-2457dd0da12b no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:11:29.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1815" for this suite. 
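The test above relies on the kubelet injecting Docker-link-style environment variables for every Service that already exists in the namespace when a pod starts. Assuming a Service named fooservice on port 8765 created before the pod, a later pod in the same namespace sees variables like:

FOOSERVICE_SERVICE_HOST=<cluster IP of fooservice>
FOOSERVICE_SERVICE_PORT=8765
FOOSERVICE_PORT=tcp://<cluster IP>:8765

kubectl exec my-pod -- sh -c 'env | grep ^FOOSERVICE_'   # hypothetical pod name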
• [SLOW TEST:8.711 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":83,"skipped":1270,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:11:29.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 6 23:11:29.910: INFO: Waiting up to 5m0s for pod "downwardapi-volume-df36916b-d792-428e-82c3-37b24608ed7f" in namespace "projected-7557" to be "success or failure" May 6 23:11:29.928: INFO: Pod "downwardapi-volume-df36916b-d792-428e-82c3-37b24608ed7f": Phase="Pending", Reason="", readiness=false. Elapsed: 18.356665ms May 6 23:11:31.933: INFO: Pod "downwardapi-volume-df36916b-d792-428e-82c3-37b24608ed7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023257939s May 6 23:11:33.937: INFO: Pod "downwardapi-volume-df36916b-d792-428e-82c3-37b24608ed7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027588191s STEP: Saw pod success May 6 23:11:33.937: INFO: Pod "downwardapi-volume-df36916b-d792-428e-82c3-37b24608ed7f" satisfied condition "success or failure" May 6 23:11:33.940: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-df36916b-d792-428e-82c3-37b24608ed7f container client-container: STEP: delete the pod May 6 23:11:33.995: INFO: Waiting for pod downwardapi-volume-df36916b-d792-428e-82c3-37b24608ed7f to disappear May 6 23:11:34.022: INFO: Pod downwardapi-volume-df36916b-d792-428e-82c3-37b24608ed7f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:11:34.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7557" for this suite. 
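The projected downwardAPI test mounts the pod's own name as a file via a projected volume and asserts the container can read it back. A minimal sketch (names illustrative):

# podname-projected.yaml -- apply with: kubectl apply -f podname-projected.yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-podname
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name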
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":84,"skipped":1273,"failed":0} ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:11:34.030: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 6 23:11:34.431: INFO: Waiting up to 5m0s for pod "downwardapi-volume-63363f69-2e5c-4c55-83c3-a2fc294dceba" in namespace "downward-api-7324" to be "success or failure" May 6 23:11:34.442: INFO: Pod "downwardapi-volume-63363f69-2e5c-4c55-83c3-a2fc294dceba": Phase="Pending", Reason="", readiness=false. Elapsed: 10.717301ms May 6 23:11:36.462: INFO: Pod "downwardapi-volume-63363f69-2e5c-4c55-83c3-a2fc294dceba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030999857s May 6 23:11:38.475: INFO: Pod "downwardapi-volume-63363f69-2e5c-4c55-83c3-a2fc294dceba": Phase="Running", Reason="", readiness=true. Elapsed: 4.043511488s May 6 23:11:40.479: INFO: Pod "downwardapi-volume-63363f69-2e5c-4c55-83c3-a2fc294dceba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.047663561s STEP: Saw pod success May 6 23:11:40.479: INFO: Pod "downwardapi-volume-63363f69-2e5c-4c55-83c3-a2fc294dceba" satisfied condition "success or failure" May 6 23:11:40.482: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-63363f69-2e5c-4c55-83c3-a2fc294dceba container client-container: STEP: delete the pod May 6 23:11:40.545: INFO: Waiting for pod downwardapi-volume-63363f69-2e5c-4c55-83c3-a2fc294dceba to disappear May 6 23:11:40.563: INFO: Pod downwardapi-volume-63363f69-2e5c-4c55-83c3-a2fc294dceba no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:11:40.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7324" for this suite. 
• [SLOW TEST:6.541 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":85,"skipped":1273,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:11:40.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:11:40.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8485" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":86,"skipped":1288,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:11:40.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-119bf144-2cd6-4cdb-9428-70994cf57dfe STEP: Creating a pod to test consume secrets May 6 23:11:40.811: INFO: Waiting up to 5m0s for pod "pod-secrets-e2e96b0d-4138-46dc-a97b-9c99cddc083f" in namespace "secrets-1766" to be "success or failure" May 6 23:11:40.840: INFO: Pod "pod-secrets-e2e96b0d-4138-46dc-a97b-9c99cddc083f": Phase="Pending", Reason="", readiness=false. Elapsed: 29.427454ms May 6 23:11:42.844: INFO: Pod "pod-secrets-e2e96b0d-4138-46dc-a97b-9c99cddc083f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.03332791s May 6 23:11:44.848: INFO: Pod "pod-secrets-e2e96b0d-4138-46dc-a97b-9c99cddc083f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037050475s May 6 23:11:46.870: INFO: Pod "pod-secrets-e2e96b0d-4138-46dc-a97b-9c99cddc083f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.059841151s STEP: Saw pod success May 6 23:11:46.870: INFO: Pod "pod-secrets-e2e96b0d-4138-46dc-a97b-9c99cddc083f" satisfied condition "success or failure" May 6 23:11:46.873: INFO: Trying to get logs from node jerma-worker pod pod-secrets-e2e96b0d-4138-46dc-a97b-9c99cddc083f container secret-volume-test: STEP: delete the pod May 6 23:11:46.938: INFO: Waiting for pod pod-secrets-e2e96b0d-4138-46dc-a97b-9c99cddc083f to disappear May 6 23:11:46.941: INFO: Pod pod-secrets-e2e96b0d-4138-46dc-a97b-9c99cddc083f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:11:46.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1766" for this suite. • [SLOW TEST:6.223 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":87,"skipped":1312,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:11:46.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-ebc40152-ad5d-424b-818f-98eb50e958cb STEP: Creating a pod to test consume secrets May 6 23:11:47.267: INFO: Waiting up to 5m0s for pod "pod-secrets-7a782aba-ece4-4ec6-9480-9416c726ca09" in namespace "secrets-9503" to be "success or failure" May 6 23:11:47.307: INFO: Pod "pod-secrets-7a782aba-ece4-4ec6-9480-9416c726ca09": Phase="Pending", Reason="", readiness=false. Elapsed: 39.864477ms May 6 23:11:49.344: INFO: Pod "pod-secrets-7a782aba-ece4-4ec6-9480-9416c726ca09": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07647879s May 6 23:11:51.577: INFO: Pod "pod-secrets-7a782aba-ece4-4ec6-9480-9416c726ca09": Phase="Pending", Reason="", readiness=false. Elapsed: 4.310292968s May 6 23:11:53.890: INFO: Pod "pod-secrets-7a782aba-ece4-4ec6-9480-9416c726ca09": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.622481382s STEP: Saw pod success May 6 23:11:53.890: INFO: Pod "pod-secrets-7a782aba-ece4-4ec6-9480-9416c726ca09" satisfied condition "success or failure" May 6 23:11:54.446: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-7a782aba-ece4-4ec6-9480-9416c726ca09 container secret-volume-test: STEP: delete the pod May 6 23:11:54.665: INFO: Waiting for pod pod-secrets-7a782aba-ece4-4ec6-9480-9416c726ca09 to disappear May 6 23:11:54.734: INFO: Pod pod-secrets-7a782aba-ece4-4ec6-9480-9416c726ca09 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:11:54.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9503" for this suite. • [SLOW TEST:7.794 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":88,"skipped":1371,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:11:54.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC May 6 23:11:54.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7840' May 6 23:11:56.193: INFO: stderr: "" May 6 23:11:56.193: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 6 23:11:57.258: INFO: Selector matched 1 pods for map[app:agnhost] May 6 23:11:57.258: INFO: Found 0 / 1 May 6 23:11:58.197: INFO: Selector matched 1 pods for map[app:agnhost] May 6 23:11:58.197: INFO: Found 0 / 1 May 6 23:11:59.356: INFO: Selector matched 1 pods for map[app:agnhost] May 6 23:11:59.357: INFO: Found 0 / 1 May 6 23:12:00.326: INFO: Selector matched 1 pods for map[app:agnhost] May 6 23:12:00.326: INFO: Found 0 / 1 May 6 23:12:01.248: INFO: Selector matched 1 pods for map[app:agnhost] May 6 23:12:01.248: INFO: Found 0 / 1 May 6 23:12:02.197: INFO: Selector matched 1 pods for map[app:agnhost] May 6 23:12:02.197: INFO: Found 1 / 1 May 6 23:12:02.197: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 6 23:12:02.201: INFO: Selector matched 1 pods for map[app:agnhost] May 6 23:12:02.201: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
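For reference, the kubectl patch logged next is a strategic-merge patch that only adds the annotation and leaves the rest of the pod untouched. An equivalent invocation with a hypothetical pod name:

kubectl patch pod my-agnhost-pod -n kubectl-7840 -p '{"metadata":{"annotations":{"x":"y"}}}'
# verify the annotation landed:
kubectl get pod my-agnhost-pod -n kubectl-7840 -o jsonpath='{.metadata.annotations.x}'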
May 6 23:12:02.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-qrkhw --namespace=kubectl-7840 -p {"metadata":{"annotations":{"x":"y"}}}' May 6 23:12:02.309: INFO: stderr: "" May 6 23:12:02.309: INFO: stdout: "pod/agnhost-master-qrkhw patched\n" STEP: checking annotations May 6 23:12:02.458: INFO: Selector matched 1 pods for map[app:agnhost] May 6 23:12:02.458: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:12:02.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7840" for this suite. • [SLOW TEST:7.724 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1432 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":89,"skipped":1377,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:12:02.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 6 23:12:02.626: INFO: PodSpec: initContainers in spec.initContainers May 6 23:13:00.357: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-3f822d81-5797-4717-9ddc-20fac364a609", GenerateName:"", Namespace:"init-container-1497", SelfLink:"/api/v1/namespaces/init-container-1497/pods/pod-init-3f822d81-5797-4717-9ddc-20fac364a609", UID:"94dc2d68-3f0d-4ed3-9fab-27757c1e2632", ResourceVersion:"14024369", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63724403522, loc:(*time.Location)(0x78ee0c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"626930169"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-zfwp5", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), 
AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc004c16e80), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-zfwp5", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-zfwp5", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, 
VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-zfwp5", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0037c5138), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0036a97a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0037c51c0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0037c51e0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0037c51e8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0037c51ec), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724403522, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724403522, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724403522, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724403522, loc:(*time.Location)(0x78ee0c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.10", PodIP:"10.244.1.100", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.100"}}, StartTime:(*v1.Time)(0xc0026a9f60), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), 
Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0019f8770)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0019f87e0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://71a00f97394039c1a559041e4a4f0e4330d004954e38f7b84972efb2ef7f1b25", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0026a9fa0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0026a9f80), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc0037c526f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:13:00.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1497" for this suite. 
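The pod dump above is the expected stuck state: with restartPolicy Always, a failing init container (init1, RestartCount:3) is retried with backoff indefinitely, init2 never runs, and the app container run1 stays Waiting because all init containers must succeed first. A minimal reconstruction of the spec the test creates, per the dump:

# init-fail.yaml -- reconstructed from the pod dump above; apply with: kubectl apply -f init-fail.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-fail
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]     # always fails, so the kubelet restarts it with backoff
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]      # never reached while init1 keeps failing
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1 # stays Waiting until every init container succeeds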
• [SLOW TEST:57.899 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":90,"skipped":1395,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 6 23:13:00.367: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-8ea4bd19-69f7-473f-99d8-50edcb95cdd0
STEP: Creating secret with name s-test-opt-upd-23853a03-560b-48c2-842d-b009add75af4
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-8ea4bd19-69f7-473f-99d8-50edcb95cdd0
STEP: Updating secret s-test-opt-upd-23853a03-560b-48c2-842d-b009add75af4
STEP: Creating secret with name s-test-opt-create-c768ec76-7463-4d27-b7eb-e80320faaf02
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 6 23:13:15.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4213" for this suite.
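The optional-updates test above deletes one secret, updates another, and creates a third while the pod is running, then waits for the kubelet to reflect all three changes in the mounted files. The key detail is optional: true on the secret volume source, which also lets the pod start before the secret exists. A volume fragment sketch (name illustrative):

volumes:
- name: creds
  secret:
    secretName: maybe-missing-secret
    optional: true      # pod starts even if the secret is absent; files appear once it is created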
• [SLOW TEST:15.474 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":91,"skipped":1408,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:13:15.841: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 6 23:13:25.525: INFO: &Pod{ObjectMeta:{send-events-11058c45-c14f-4ce2-bc02-9ef069a5ec47 events-3928 /api/v1/namespaces/events-3928/pods/send-events-11058c45-c14f-4ce2-bc02-9ef069a5ec47 e62b9fc9-ee0f-4046-85bd-f9dba5f3fd94 14024487 0 2020-05-06 23:13:16 +0000 UTC map[name:foo time:927862296] map[] [] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pfp2w,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pfp2w,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pfp2w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:13:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:13:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:13:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:13:16 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.101,StartTime:2020-05-06 23:13:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-06 23:13:22 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://3c3af6aeb57094c717e1ceb4642bd714beddbb766ab05d77af114ae8830b4aa4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.101,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod May 6 23:13:27.985: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 6 23:13:29.990: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:13:29.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-3928" for this suite. • [SLOW TEST:15.495 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":92,"skipped":1422,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:13:31.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3381.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3381.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3381.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3381.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3381.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3381.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3381.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3381.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3381.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3381.svc.cluster.local;check="$$(dig +tcp +noall +answer +search 
_http._tcp.test-service-2.dns-3381.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3381.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3381.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 6.166.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.166.6_udp@PTR;check="$$(dig +tcp +noall +answer +search 6.166.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.166.6_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3381.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3381.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3381.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3381.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3381.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3381.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3381.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3381.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3381.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3381.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3381.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3381.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3381.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 6.166.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.166.6_udp@PTR;check="$$(dig +tcp +noall +answer +search 6.166.105.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.105.166.6_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 6 23:13:44.102: INFO: Unable to read wheezy_udp@dns-test-service.dns-3381.svc.cluster.local from pod dns-3381/dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf: the server could not find the requested resource (get pods dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf) May 6 23:13:44.104: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3381.svc.cluster.local from pod dns-3381/dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf: the server could not find the requested resource (get pods dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf) May 6 23:13:44.107: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3381.svc.cluster.local from pod dns-3381/dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf: the server could not find the requested resource (get pods dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf) May 6 23:13:44.110: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3381.svc.cluster.local from pod dns-3381/dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf: the server could not find the requested resource (get pods dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf) May 6 23:13:44.129: INFO: Unable to read jessie_udp@dns-test-service.dns-3381.svc.cluster.local from pod dns-3381/dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf: the server could not find the requested resource (get pods dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf) May 6 23:13:44.132: INFO: Unable to read jessie_tcp@dns-test-service.dns-3381.svc.cluster.local from pod dns-3381/dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf: the server could not find the requested resource (get pods dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf) May 6 23:13:44.135: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3381.svc.cluster.local from pod dns-3381/dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf: the server could not find the requested resource (get pods dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf) May 6 23:13:44.138: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3381.svc.cluster.local from pod dns-3381/dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf: the server could not find the requested resource (get pods dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf) May 6 23:13:44.154: INFO: Lookups using dns-3381/dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf failed for: [wheezy_udp@dns-test-service.dns-3381.svc.cluster.local wheezy_tcp@dns-test-service.dns-3381.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3381.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3381.svc.cluster.local jessie_udp@dns-test-service.dns-3381.svc.cluster.local jessie_tcp@dns-test-service.dns-3381.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3381.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3381.svc.cluster.local] May 6 23:13:49.220: INFO: Unable to read wheezy_udp@dns-test-service.dns-3381.svc.cluster.local from pod dns-3381/dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf: the server could not find the requested resource (get pods dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf) May 6 23:13:49.223: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3381.svc.cluster.local from pod dns-3381/dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf: the server could not find the requested resource (get pods dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf) 
May 6 23:13:49.226: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3381.svc.cluster.local from pod dns-3381/dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf: the server could not find the requested resource (get pods dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf) May 6 23:13:49.228: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3381.svc.cluster.local from pod dns-3381/dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf: the server could not find the requested resource (get pods dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf) May 6 23:13:49.285: INFO: Unable to read jessie_udp@dns-test-service.dns-3381.svc.cluster.local from pod dns-3381/dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf: the server could not find the requested resource (get pods dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf) May 6 23:13:49.287: INFO: Unable to read jessie_tcp@dns-test-service.dns-3381.svc.cluster.local from pod dns-3381/dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf: the server could not find the requested resource (get pods dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf) May 6 23:13:49.292: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3381.svc.cluster.local from pod dns-3381/dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf: the server could not find the requested resource (get pods dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf) May 6 23:13:49.296: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3381.svc.cluster.local from pod dns-3381/dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf: the server could not find the requested resource (get pods dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf) May 6 23:13:49.310: INFO: Lookups using dns-3381/dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf failed for: [wheezy_udp@dns-test-service.dns-3381.svc.cluster.local wheezy_tcp@dns-test-service.dns-3381.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3381.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3381.svc.cluster.local jessie_udp@dns-test-service.dns-3381.svc.cluster.local jessie_tcp@dns-test-service.dns-3381.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3381.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3381.svc.cluster.local] May 6 23:13:54.172: INFO: Unable to read wheezy_udp@dns-test-service.dns-3381.svc.cluster.local from pod dns-3381/dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf: the server could not find the requested resource (get pods dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf) May 6 23:13:54.175: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3381.svc.cluster.local from pod dns-3381/dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf: the server could not find the requested resource (get pods dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf) May 6 23:13:54.179: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3381.svc.cluster.local from pod dns-3381/dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf: the server could not find the requested resource (get pods dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf) May 6 23:13:54.182: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3381.svc.cluster.local from pod dns-3381/dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf: the server could not find the requested resource (get pods dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf) May 6 23:13:54.203: INFO: Unable to read jessie_udp@dns-test-service.dns-3381.svc.cluster.local from pod dns-3381/dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf: the server could not find the requested resource (get pods 
dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf) May 6 23:13:54.206: INFO: Unable to read jessie_tcp@dns-test-service.dns-3381.svc.cluster.local from pod dns-3381/dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf: the server could not find the requested resource (get pods dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf) May 6 23:13:54.209: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3381.svc.cluster.local from pod dns-3381/dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf: the server could not find the requested resource (get pods dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf) May 6 23:13:54.212: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3381.svc.cluster.local from pod dns-3381/dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf: the server could not find the requested resource (get pods dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf) May 6 23:13:54.283: INFO: Lookups using dns-3381/dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf failed for: [wheezy_udp@dns-test-service.dns-3381.svc.cluster.local wheezy_tcp@dns-test-service.dns-3381.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3381.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3381.svc.cluster.local jessie_udp@dns-test-service.dns-3381.svc.cluster.local jessie_tcp@dns-test-service.dns-3381.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3381.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3381.svc.cluster.local] May 6 23:13:59.340: INFO: Unable to read wheezy_udp@dns-test-service.dns-3381.svc.cluster.local from pod dns-3381/dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf: the server could not find the requested resource (get pods dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf) May 6 23:13:59.343: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3381.svc.cluster.local from pod dns-3381/dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf: the server could not find the requested resource (get pods dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf) May 6 23:13:59.346: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3381.svc.cluster.local from pod dns-3381/dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf: the server could not find the requested resource (get pods dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf) May 6 23:13:59.348: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3381.svc.cluster.local from pod dns-3381/dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf: the server could not find the requested resource (get pods dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf) May 6 23:13:59.368: INFO: Unable to read jessie_udp@dns-test-service.dns-3381.svc.cluster.local from pod dns-3381/dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf: the server could not find the requested resource (get pods dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf) May 6 23:13:59.370: INFO: Unable to read jessie_tcp@dns-test-service.dns-3381.svc.cluster.local from pod dns-3381/dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf: the server could not find the requested resource (get pods dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf) May 6 23:13:59.372: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3381.svc.cluster.local from pod dns-3381/dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf: the server could not find the requested resource (get pods dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf) May 6 23:13:59.375: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3381.svc.cluster.local from pod dns-3381/dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf: the server could 
not find the requested resource (get pods dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf) May 6 23:13:59.390: INFO: Lookups using dns-3381/dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf failed for: [wheezy_udp@dns-test-service.dns-3381.svc.cluster.local wheezy_tcp@dns-test-service.dns-3381.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3381.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3381.svc.cluster.local jessie_udp@dns-test-service.dns-3381.svc.cluster.local jessie_tcp@dns-test-service.dns-3381.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3381.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3381.svc.cluster.local] May 6 23:14:04.159: INFO: Unable to read wheezy_udp@dns-test-service.dns-3381.svc.cluster.local from pod dns-3381/dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf: the server could not find the requested resource (get pods dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf) May 6 23:14:04.163: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3381.svc.cluster.local from pod dns-3381/dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf: the server could not find the requested resource (get pods dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf) May 6 23:14:04.167: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3381.svc.cluster.local from pod dns-3381/dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf: the server could not find the requested resource (get pods dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf) May 6 23:14:04.170: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3381.svc.cluster.local from pod dns-3381/dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf: the server could not find the requested resource (get pods dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf) May 6 23:14:04.191: INFO: Unable to read jessie_udp@dns-test-service.dns-3381.svc.cluster.local from pod dns-3381/dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf: the server could not find the requested resource (get pods dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf) May 6 23:14:04.194: INFO: Unable to read jessie_tcp@dns-test-service.dns-3381.svc.cluster.local from pod dns-3381/dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf: the server could not find the requested resource (get pods dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf) May 6 23:14:04.197: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3381.svc.cluster.local from pod dns-3381/dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf: the server could not find the requested resource (get pods dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf) May 6 23:14:04.200: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3381.svc.cluster.local from pod dns-3381/dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf: the server could not find the requested resource (get pods dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf) May 6 23:14:04.218: INFO: Lookups using dns-3381/dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf failed for: [wheezy_udp@dns-test-service.dns-3381.svc.cluster.local wheezy_tcp@dns-test-service.dns-3381.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3381.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3381.svc.cluster.local jessie_udp@dns-test-service.dns-3381.svc.cluster.local jessie_tcp@dns-test-service.dns-3381.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3381.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3381.svc.cluster.local] May 6 23:14:09.158: INFO: Unable to read wheezy_udp@dns-test-service.dns-3381.svc.cluster.local 
from pod dns-3381/dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf: the server could not find the requested resource (get pods dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf) May 6 23:14:09.160: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3381.svc.cluster.local from pod dns-3381/dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf: the server could not find the requested resource (get pods dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf) May 6 23:14:09.163: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3381.svc.cluster.local from pod dns-3381/dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf: the server could not find the requested resource (get pods dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf) May 6 23:14:09.165: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3381.svc.cluster.local from pod dns-3381/dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf: the server could not find the requested resource (get pods dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf) May 6 23:14:09.220: INFO: Unable to read jessie_udp@dns-test-service.dns-3381.svc.cluster.local from pod dns-3381/dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf: the server could not find the requested resource (get pods dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf) May 6 23:14:09.223: INFO: Unable to read jessie_tcp@dns-test-service.dns-3381.svc.cluster.local from pod dns-3381/dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf: the server could not find the requested resource (get pods dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf) May 6 23:14:09.225: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3381.svc.cluster.local from pod dns-3381/dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf: the server could not find the requested resource (get pods dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf) May 6 23:14:09.227: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3381.svc.cluster.local from pod dns-3381/dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf: the server could not find the requested resource (get pods dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf) May 6 23:14:09.302: INFO: Lookups using dns-3381/dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf failed for: [wheezy_udp@dns-test-service.dns-3381.svc.cluster.local wheezy_tcp@dns-test-service.dns-3381.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3381.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3381.svc.cluster.local jessie_udp@dns-test-service.dns-3381.svc.cluster.local jessie_tcp@dns-test-service.dns-3381.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3381.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3381.svc.cluster.local] May 6 23:14:14.214: INFO: DNS probes using dns-3381/dns-test-dfced71f-6f78-4096-9b60-0515d0da4fdf succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:14:15.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3381" for this suite. 
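Note on the probe output above: the repeated "Unable to read ..." entries are expected while the test service's DNS records propagate; the run only fails if the full probe set never converges, and here it converges at 23:14:14. The same names can be checked by hand with a scratch pod. A minimal sketch, assuming an image that ships dig (the pod name and image below are illustrative, not the test's generated spec):

# Scratch pod for manual lookups (image is an assumption; any image with dig works):
kubectl run dns-probe -n dns-3381 --restart=Never --image=gcr.io/kubernetes-e2e-test-images/dnsutils:1.1 -- sleep 3600
# A record behind the ClusterIP service:
kubectl exec -n dns-3381 dns-probe -- dig +short dns-test-service.dns-3381.svc.cluster.local
# SRV record for the named port, matching the test's _http._tcp queries:
kubectl exec -n dns-3381 dns-probe -- dig +short _http._tcp.dns-test-service.dns-3381.svc.cluster.local SRV
kubectl delete pod -n dns-3381 dns-probe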
• [SLOW TEST:44.390 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":93,"skipped":1438,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:14:15.727: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 6 23:14:15.916: INFO: Creating ReplicaSet my-hostname-basic-25cf5b10-cfd9-45eb-8040-1ba0e64fab93 May 6 23:14:15.972: INFO: Pod name my-hostname-basic-25cf5b10-cfd9-45eb-8040-1ba0e64fab93: Found 0 pods out of 1 May 6 23:14:20.987: INFO: Pod name my-hostname-basic-25cf5b10-cfd9-45eb-8040-1ba0e64fab93: Found 1 pods out of 1 May 6 23:14:20.987: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-25cf5b10-cfd9-45eb-8040-1ba0e64fab93" is running May 6 23:14:23.064: INFO: Pod "my-hostname-basic-25cf5b10-cfd9-45eb-8040-1ba0e64fab93-w9tbv" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-06 23:14:16 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-06 23:14:16 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-25cf5b10-cfd9-45eb-8040-1ba0e64fab93]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-06 23:14:16 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-25cf5b10-cfd9-45eb-8040-1ba0e64fab93]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-06 23:14:15 +0000 UTC Reason: Message:}]) May 6 23:14:23.064: INFO: Trying to dial the pod May 6 23:14:28.076: INFO: Controller my-hostname-basic-25cf5b10-cfd9-45eb-8040-1ba0e64fab93: Got expected result from replica 1 [my-hostname-basic-25cf5b10-cfd9-45eb-8040-1ba0e64fab93-w9tbv]: "my-hostname-basic-25cf5b10-cfd9-45eb-8040-1ba0e64fab93-w9tbv", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:14:28.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-2294" for this suite. 
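For reference, the ReplicaSet test above runs one replica of an image that answers HTTP with its own pod name and then dials each replica, which is what "Got expected result from replica 1" records. A minimal hand-built equivalent, as a sketch only: the name, label, and image are illustrative, and the agnhost image/args are an assumption (any image that serves its hostname works):

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: hostname-demo                # illustrative, not the test's generated name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hostname-demo
  template:
    metadata:
      labels:
        app: hostname-demo
    spec:
      containers:
      - name: hostname-demo
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8   # assumption
        args: ["serve-hostname"]                               # serves the pod's hostname over HTTP
EOF
kubectl get pods -l app=hostname-demo -o name   # each listed pod should report its own name when dialed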
• [SLOW TEST:12.358 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":94,"skipped":1448,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:14:28.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 6 23:14:28.610: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cf5dc195-1051-40ae-afd7-40dff2ac61e8" in namespace "downward-api-8347" to be "success or failure" May 6 23:14:29.011: INFO: Pod "downwardapi-volume-cf5dc195-1051-40ae-afd7-40dff2ac61e8": Phase="Pending", Reason="", readiness=false. Elapsed: 400.955258ms May 6 23:14:31.047: INFO: Pod "downwardapi-volume-cf5dc195-1051-40ae-afd7-40dff2ac61e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.436497361s May 6 23:14:33.051: INFO: Pod "downwardapi-volume-cf5dc195-1051-40ae-afd7-40dff2ac61e8": Phase="Running", Reason="", readiness=true. Elapsed: 4.440555754s May 6 23:14:35.184: INFO: Pod "downwardapi-volume-cf5dc195-1051-40ae-afd7-40dff2ac61e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.574148403s STEP: Saw pod success May 6 23:14:35.184: INFO: Pod "downwardapi-volume-cf5dc195-1051-40ae-afd7-40dff2ac61e8" satisfied condition "success or failure" May 6 23:14:35.188: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-cf5dc195-1051-40ae-afd7-40dff2ac61e8 container client-container: STEP: delete the pod May 6 23:14:35.326: INFO: Waiting for pod downwardapi-volume-cf5dc195-1051-40ae-afd7-40dff2ac61e8 to disappear May 6 23:14:35.586: INFO: Pod downwardapi-volume-cf5dc195-1051-40ae-afd7-40dff2ac61e8 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:14:35.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8347" for this suite. 
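The behavior exercised above: when a container declares no CPU limit, a downward API volume entry for limits.cpu falls back to the node's allocatable CPU. A minimal sketch of such a pod (names and image are illustrative, not the test's generated spec):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-cpu-demo            # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.31
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu       # no limit is set, so this resolves to node allocatable CPU
EOF
kubectl logs downward-cpu-demo       # prints the node's allocatable CPU, in cores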
• [SLOW TEST:7.549 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":95,"skipped":1460,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:14:35.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1754 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 6 23:14:35.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-5447' May 6 23:14:36.216: INFO: stderr: "" May 6 23:14:36.216: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1759 May 6 23:14:36.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-5447' May 6 23:14:39.562: INFO: stderr: "" May 6 23:14:39.562: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:14:39.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5447" for this suite. 
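The kubectl invocation logged above is reproducible as-is; --restart=Never is what makes kubectl run create a bare Pod rather than a workload controller on this v1.17-era client:

kubectl run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 \
  --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-5447
# Confirm a plain Pod with the expected restart policy was created:
kubectl get pod e2e-test-httpd-pod -n kubectl-5447 -o jsonpath='{.spec.restartPolicy}{"\n"}'
kubectl delete pod e2e-test-httpd-pod -n kubectl-5447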
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":96,"skipped":1501,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:14:39.692: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 6 23:14:40.477: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"0982b06f-96d7-4b29-b5d2-98003a7770e3", Controller:(*bool)(0xc0045eb3fa), BlockOwnerDeletion:(*bool)(0xc0045eb3fb)}} May 6 23:14:40.724: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"3e161fe2-da3d-496f-8340-30f0e816bb49", Controller:(*bool)(0xc000c8edf2), BlockOwnerDeletion:(*bool)(0xc000c8edf3)}} May 6 23:14:40.772: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"d8ca0dfc-2d96-4e43-a774-187edf27ec49", Controller:(*bool)(0xc0053878da), BlockOwnerDeletion:(*bool)(0xc0053878db)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:14:45.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1618" for this suite. 
• [SLOW TEST:6.224 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":97,"skipped":1510,"failed":0} SSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:14:45.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 6 23:14:46.164: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c06c22ef-25e3-4895-bf9d-aae448f06932" in namespace "projected-4458" to be "success or failure" May 6 23:14:46.203: INFO: Pod "downwardapi-volume-c06c22ef-25e3-4895-bf9d-aae448f06932": Phase="Pending", Reason="", readiness=false. Elapsed: 39.333045ms May 6 23:14:48.436: INFO: Pod "downwardapi-volume-c06c22ef-25e3-4895-bf9d-aae448f06932": Phase="Pending", Reason="", readiness=false. Elapsed: 2.27222114s May 6 23:14:50.483: INFO: Pod "downwardapi-volume-c06c22ef-25e3-4895-bf9d-aae448f06932": Phase="Pending", Reason="", readiness=false. Elapsed: 4.319585436s May 6 23:14:52.538: INFO: Pod "downwardapi-volume-c06c22ef-25e3-4895-bf9d-aae448f06932": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.374724806s STEP: Saw pod success May 6 23:14:52.539: INFO: Pod "downwardapi-volume-c06c22ef-25e3-4895-bf9d-aae448f06932" satisfied condition "success or failure" May 6 23:14:52.574: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-c06c22ef-25e3-4895-bf9d-aae448f06932 container client-container: STEP: delete the pod May 6 23:14:53.351: INFO: Waiting for pod downwardapi-volume-c06c22ef-25e3-4895-bf9d-aae448f06932 to disappear May 6 23:14:53.419: INFO: Pod downwardapi-volume-c06c22ef-25e3-4895-bf9d-aae448f06932 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:14:53.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4458" for this suite. 
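Same fallback as the earlier downward API case, but for memory and via a projected volume: with no memory limit declared, limits.memory resolves to node allocatable. A minimal sketch (names and image illustrative; the divisor is an optional field I use here to report MiB instead of bytes):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-mem-demo           # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.31
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
              divisor: 1Mi           # no limit set, so this reports node allocatable memory, in MiB
EOF
kubectl logs projected-mem-demo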
• [SLOW TEST:7.822 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":98,"skipped":1513,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:14:53.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0506 23:15:05.749595 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 6 23:15:05.749: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:15:05.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-518" for this suite. 
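"Not orphaning" above means the delete cascades: once the rc is gone, its pods are garbage collected, which is what "wait for all pods to be garbage collected" verifies. On the v1.17-era kubectl used in this run the two modes look like this (rc name illustrative):

# Cascading delete, the behavior exercised above; dependents are garbage collected:
kubectl delete rc my-rc
# Orphaning variant for contrast; the rc is removed but its pods are left behind:
kubectl delete rc my-rc --cascade=false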
• [SLOW TEST:12.155 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":99,"skipped":1527,"failed":0} SSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:15:05.893: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 6 23:15:06.360: INFO: Create a RollingUpdate DaemonSet May 6 23:15:06.363: INFO: Check that daemon pods launch on every node of the cluster May 6 23:15:06.399: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 23:15:06.412: INFO: Number of nodes with available pods: 0 May 6 23:15:06.412: INFO: Node jerma-worker is running more than one daemon pod May 6 23:15:07.419: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 23:15:07.422: INFO: Number of nodes with available pods: 0 May 6 23:15:07.422: INFO: Node jerma-worker is running more than one daemon pod May 6 23:15:08.839: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 23:15:08.843: INFO: Number of nodes with available pods: 0 May 6 23:15:08.843: INFO: Node jerma-worker is running more than one daemon pod May 6 23:15:09.509: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 23:15:09.512: INFO: Number of nodes with available pods: 0 May 6 23:15:09.512: INFO: Node jerma-worker is running more than one daemon pod May 6 23:15:10.443: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 23:15:10.446: INFO: Number of nodes with available pods: 0 May 6 23:15:10.446: INFO: Node jerma-worker is running more than one daemon pod May 6 23:15:11.418: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 23:15:11.421: INFO: Number of nodes with available pods: 1 May 6 23:15:11.421: INFO: 
Node jerma-worker is running more than one daemon pod May 6 23:15:12.719: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 23:15:12.740: INFO: Number of nodes with available pods: 2 May 6 23:15:12.740: INFO: Number of running nodes: 2, number of available pods: 2 May 6 23:15:12.740: INFO: Update the DaemonSet to trigger a rollout May 6 23:15:12.746: INFO: Updating DaemonSet daemon-set May 6 23:15:16.764: INFO: Roll back the DaemonSet before rollout is complete May 6 23:15:16.770: INFO: Updating DaemonSet daemon-set May 6 23:15:16.770: INFO: Make sure DaemonSet rollback is complete May 6 23:15:16.832: INFO: Wrong image for pod: daemon-set-nrhbt. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 6 23:15:16.832: INFO: Pod daemon-set-nrhbt is not available May 6 23:15:16.946: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 23:15:18.270: INFO: Wrong image for pod: daemon-set-nrhbt. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 6 23:15:18.270: INFO: Pod daemon-set-nrhbt is not available May 6 23:15:18.324: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 23:15:19.071: INFO: Pod daemon-set-frtf9 is not available May 6 23:15:19.076: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2006, will wait for the garbage collector to delete the pods May 6 23:15:19.220: INFO: Deleting DaemonSet.extensions daemon-set took: 5.624012ms May 6 23:15:19.320: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.222456ms May 6 23:15:30.400: INFO: Number of nodes with available pods: 0 May 6 23:15:30.400: INFO: Number of running nodes: 0, number of available pods: 0 May 6 23:15:30.403: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2006/daemonsets","resourceVersion":"14025171"},"items":null} May 6 23:15:30.412: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2006/pods","resourceVersion":"14025173"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:15:31.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2006" for this suite. 
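The rollback sequence above (bad image foo:non-existent, then undo before the rollout completes, with daemon-set-nrhbt never restarted unnecessarily) maps onto standard rollout commands. A hand-driven equivalent, as a sketch: the DaemonSet name and namespace are from the log, but the container name "app" is an assumption:

kubectl -n daemonsets-2006 set image daemonset/daemon-set app=foo:non-existent   # trigger a rollout that cannot complete
kubectl -n daemonsets-2006 rollout status daemonset/daemon-set --timeout=30s     # observe it stall
kubectl -n daemonsets-2006 rollout undo daemonset/daemon-set                     # roll back mid-rollout
kubectl -n daemonsets-2006 rollout history daemonset/daemon-set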
• [SLOW TEST:25.114 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":100,"skipped":1535,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:15:31.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 6 23:15:32.790: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 6 23:15:34.843: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724403733, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724403733, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724403733, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724403732, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 23:15:36.850: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724403733, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724403733, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724403733, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724403732, 
loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 23:15:40.004: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 6 23:15:40.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:15:41.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-5864" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:11.588 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":101,"skipped":1538,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:15:42.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 6 23:15:43.710: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5731 /api/v1/namespaces/watch-5731/configmaps/e2e-watch-test-label-changed 12be91b6-e22e-4325-9ed7-fea16b3e191f 14025290 0 2020-05-06 23:15:43 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 6 23:15:43.710: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5731 /api/v1/namespaces/watch-5731/configmaps/e2e-watch-test-label-changed 12be91b6-e22e-4325-9ed7-fea16b3e191f 14025293 0 2020-05-06 23:15:43 +0000 UTC 
map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 6 23:15:43.710: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5731 /api/v1/namespaces/watch-5731/configmaps/e2e-watch-test-label-changed 12be91b6-e22e-4325-9ed7-fea16b3e191f 14025295 0 2020-05-06 23:15:43 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 6 23:15:54.020: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5731 /api/v1/namespaces/watch-5731/configmaps/e2e-watch-test-label-changed 12be91b6-e22e-4325-9ed7-fea16b3e191f 14025337 0 2020-05-06 23:15:43 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 6 23:15:54.020: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5731 /api/v1/namespaces/watch-5731/configmaps/e2e-watch-test-label-changed 12be91b6-e22e-4325-9ed7-fea16b3e191f 14025338 0 2020-05-06 23:15:43 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} May 6 23:15:54.020: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5731 /api/v1/namespaces/watch-5731/configmaps/e2e-watch-test-label-changed 12be91b6-e22e-4325-9ed7-fea16b3e191f 14025339 0 2020-05-06 23:15:43 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:15:54.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5731" for this suite. 
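The watch semantics verified above: events are delivered only while the object matches the label selector, so relabeling it out of the selector surfaces as DELETED and relabeling it back surfaces as ADDED, carrying the mutations made in between. A sketch of the same experiment by hand, with the names from the log (the temporary label value is illustrative):

# Shell 1: watch only configmaps matching the selector:
kubectl get configmaps -n watch-5731 -w -l watch-this-configmap=label-changed-and-restored
# (newer kubectl can add --output-watch-events=true to print the ADDED/MODIFIED/DELETED type)
# Shell 2: move the object out of, then back into, the selector:
kubectl label configmap e2e-watch-test-label-changed -n watch-5731 watch-this-configmap=unmatched --overwrite
kubectl label configmap e2e-watch-test-label-changed -n watch-5731 watch-this-configmap=label-changed-and-restored --overwrite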
• [SLOW TEST:11.432 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":102,"skipped":1548,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:15:54.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-9c738219-3eb8-425b-a3a1-6c1e57374d95 STEP: Creating a pod to test consume secrets May 6 23:15:54.163: INFO: Waiting up to 5m0s for pod "pod-secrets-e34f1c90-149c-4d5c-b078-4a248dd70193" in namespace "secrets-803" to be "success or failure" May 6 23:15:54.251: INFO: Pod "pod-secrets-e34f1c90-149c-4d5c-b078-4a248dd70193": Phase="Pending", Reason="", readiness=false. Elapsed: 87.840271ms May 6 23:15:56.255: INFO: Pod "pod-secrets-e34f1c90-149c-4d5c-b078-4a248dd70193": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091292711s May 6 23:15:58.259: INFO: Pod "pod-secrets-e34f1c90-149c-4d5c-b078-4a248dd70193": Phase="Pending", Reason="", readiness=false. Elapsed: 4.095557779s May 6 23:16:00.263: INFO: Pod "pod-secrets-e34f1c90-149c-4d5c-b078-4a248dd70193": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.099221795s STEP: Saw pod success May 6 23:16:00.263: INFO: Pod "pod-secrets-e34f1c90-149c-4d5c-b078-4a248dd70193" satisfied condition "success or failure" May 6 23:16:00.266: INFO: Trying to get logs from node jerma-worker pod pod-secrets-e34f1c90-149c-4d5c-b078-4a248dd70193 container secret-volume-test: STEP: delete the pod May 6 23:16:00.512: INFO: Waiting for pod pod-secrets-e34f1c90-149c-4d5c-b078-4a248dd70193 to disappear May 6 23:16:00.557: INFO: Pod pod-secrets-e34f1c90-149c-4d5c-b078-4a248dd70193 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:16:00.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-803" for this suite. 
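"Mappings and Item Mode" above refers to the items list of a secret volume: each key is remapped to a chosen file path with an explicit per-file mode. A minimal sketch of the same shape (all names, the key, and the image are illustrative, not the test's generated spec):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: secret-demo                  # illustrative
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secret-demo              # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.31
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-demo
      items:
      - key: data-1
        path: new-path-data-1        # the mapping: key data-1 appears under this file name
        mode: 0400                   # the per-item mode; use decimal 256 if writing JSON
EOF
kubectl logs pod-secret-demo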
• [SLOW TEST:6.537 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":103,"skipped":1562,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:16:00.566: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1525 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 6 23:16:00.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-6035' May 6 23:16:14.434: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 6 23:16:14.434: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created STEP: confirm that you can get logs from an rc May 6 23:16:14.467: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-xllbj] May 6 23:16:14.467: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-xllbj" in namespace "kubectl-6035" to be "running and ready" May 6 23:16:14.473: INFO: Pod "e2e-test-httpd-rc-xllbj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.247046ms May 6 23:16:16.830: INFO: Pod "e2e-test-httpd-rc-xllbj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.363490637s May 6 23:16:18.835: INFO: Pod "e2e-test-httpd-rc-xllbj": Phase="Pending", Reason="", readiness=false. Elapsed: 4.368319774s May 6 23:16:20.892: INFO: Pod "e2e-test-httpd-rc-xllbj": Phase="Pending", Reason="", readiness=false. Elapsed: 6.425584461s May 6 23:16:22.896: INFO: Pod "e2e-test-httpd-rc-xllbj": Phase="Running", Reason="", readiness=true. Elapsed: 8.429289814s May 6 23:16:22.896: INFO: Pod "e2e-test-httpd-rc-xllbj" satisfied condition "running and ready" May 6 23:16:22.896: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-httpd-rc-xllbj] May 6 23:16:22.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-6035' May 6 23:16:23.173: INFO: stderr: "" May 6 23:16:23.173: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.15. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.2.15. Set the 'ServerName' directive globally to suppress this message\n[Wed May 06 23:16:19.718862 2020] [mpm_event:notice] [pid 1:tid 139719639169896] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Wed May 06 23:16:19.718916 2020] [core:notice] [pid 1:tid 139719639169896] AH00094: Command line: 'httpd -D FOREGROUND'\n" [AfterEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1530 May 6 23:16:23.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-6035' May 6 23:16:23.298: INFO: stderr: "" May 6 23:16:23.298: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:16:23.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6035" for this suite. • [SLOW TEST:22.740 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1521 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":278,"completed":104,"skipped":1573,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:16:23.306: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium May 6 23:16:25.999: INFO: Waiting up to 5m0s for pod "pod-54864332-1615-4858-87ef-111fb580041a" in namespace "emptydir-1265" to be "success or failure" May 6 23:16:26.923: INFO: Pod "pod-54864332-1615-4858-87ef-111fb580041a": Phase="Pending", Reason="", readiness=false. Elapsed: 923.907615ms May 6 23:16:29.216: INFO: Pod "pod-54864332-1615-4858-87ef-111fb580041a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.217146128s May 6 23:16:32.091: INFO: Pod "pod-54864332-1615-4858-87ef-111fb580041a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.092468118s May 6 23:16:34.156: INFO: Pod "pod-54864332-1615-4858-87ef-111fb580041a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.157133008s May 6 23:16:36.342: INFO: Pod "pod-54864332-1615-4858-87ef-111fb580041a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.343061287s May 6 23:16:38.346: INFO: Pod "pod-54864332-1615-4858-87ef-111fb580041a": Phase="Pending", Reason="", readiness=false. Elapsed: 12.347375896s May 6 23:16:40.767: INFO: Pod "pod-54864332-1615-4858-87ef-111fb580041a": Phase="Pending", Reason="", readiness=false. Elapsed: 14.768449689s May 6 23:16:42.827: INFO: Pod "pod-54864332-1615-4858-87ef-111fb580041a": Phase="Pending", Reason="", readiness=false. Elapsed: 16.827876402s May 6 23:16:44.929: INFO: Pod "pod-54864332-1615-4858-87ef-111fb580041a": Phase="Running", Reason="", readiness=true. Elapsed: 18.930538933s May 6 23:16:46.958: INFO: Pod "pod-54864332-1615-4858-87ef-111fb580041a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 20.959504924s STEP: Saw pod success May 6 23:16:46.958: INFO: Pod "pod-54864332-1615-4858-87ef-111fb580041a" satisfied condition "success or failure" May 6 23:16:46.961: INFO: Trying to get logs from node jerma-worker pod pod-54864332-1615-4858-87ef-111fb580041a container test-container: STEP: delete the pod May 6 23:16:48.606: INFO: Waiting for pod pod-54864332-1615-4858-87ef-111fb580041a to disappear May 6 23:16:48.950: INFO: Pod pod-54864332-1615-4858-87ef-111fb580041a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:16:48.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1265" for this suite. • [SLOW TEST:25.653 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":105,"skipped":1582,"failed":0} SSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:16:48.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-8859 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-8859 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8859 May 6 23:16:52.013: INFO: Found 0 stateful pods, waiting for 1 May 6 23:17:02.440: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false May 6 23:17:12.016: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 6 23:17:12.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8859 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 6 23:17:12.359: INFO: stderr: "I0506 23:17:12.137012 2247 log.go:172] (0xc000b03550) (0xc000aec320) Create stream\nI0506 23:17:12.137075 2247 log.go:172] (0xc000b03550) (0xc000aec320) Stream added, broadcasting: 1\nI0506 23:17:12.138966 2247 log.go:172] (0xc000b03550) Reply frame received for 1\nI0506 23:17:12.139033 2247 log.go:172] (0xc000b03550) (0xc000b180a0) Create stream\nI0506 23:17:12.139053 2247 log.go:172] (0xc000b03550) (0xc000b180a0) Stream added, broadcasting: 3\nI0506 23:17:12.139888 2247 log.go:172] (0xc000b03550) Reply frame received for 3\nI0506 23:17:12.139914 2247 log.go:172] (0xc000b03550) (0xc000aec3c0) Create stream\nI0506 23:17:12.139924 2247 log.go:172] (0xc000b03550) (0xc000aec3c0) Stream added, broadcasting: 5\nI0506 23:17:12.140932 2247 log.go:172] (0xc000b03550) Reply frame received for 5\nI0506 23:17:12.192239 2247 log.go:172] (0xc000b03550) Data frame received for 5\nI0506 23:17:12.192266 2247 log.go:172] (0xc000aec3c0) (5) Data frame handling\nI0506 23:17:12.192283 2247 log.go:172] (0xc000aec3c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0506 23:17:12.350579 2247 log.go:172] (0xc000b03550) Data frame received for 5\nI0506 23:17:12.350611 2247 log.go:172] (0xc000aec3c0) (5) Data frame handling\nI0506 23:17:12.350647 2247 log.go:172] (0xc000b03550) Data frame received for 3\nI0506 23:17:12.350659 2247 log.go:172] (0xc000b180a0) (3) Data frame handling\nI0506 23:17:12.350668 2247 log.go:172] (0xc000b180a0) (3) Data frame sent\nI0506 23:17:12.351316 2247 log.go:172] (0xc000b03550) Data frame received for 3\nI0506 23:17:12.351335 2247 log.go:172] (0xc000b180a0) (3) Data frame handling\nI0506 23:17:12.354286 2247 log.go:172] (0xc000b03550) Data frame received for 1\nI0506 23:17:12.354311 2247 log.go:172] (0xc000aec320) (1) Data frame handling\nI0506 23:17:12.354333 2247 log.go:172] (0xc000aec320) (1) Data frame sent\nI0506 23:17:12.354349 2247 log.go:172] (0xc000b03550) (0xc000aec320) Stream removed, broadcasting: 1\nI0506 23:17:12.354361 2247 log.go:172] (0xc000b03550) Go away received\nI0506 23:17:12.354760 2247 log.go:172] (0xc000b03550) (0xc000aec320) Stream removed, broadcasting: 1\nI0506 23:17:12.354790 2247 log.go:172] (0xc000b03550) (0xc000b180a0) Stream removed, broadcasting: 3\nI0506 23:17:12.354802 2247 log.go:172] (0xc000b03550) (0xc000aec3c0) Stream removed, broadcasting: 5\n" May 6 23:17:12.359: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 6 23:17:12.359: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 6 23:17:12.474: 
INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 6 23:17:22.493: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 6 23:17:22.493: INFO: Waiting for statefulset status.replicas updated to 0 May 6 23:17:23.049: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999501s May 6 23:17:24.052: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.591165155s May 6 23:17:25.057: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.587876569s May 6 23:17:26.188: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.583609042s May 6 23:17:27.511: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.45200646s May 6 23:17:28.549: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.129270175s May 6 23:17:29.594: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.091305239s May 6 23:17:30.651: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.046680207s May 6 23:17:31.679: INFO: Verifying statefulset ss doesn't scale past 1 for another 988.801892ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8859 May 6 23:17:33.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8859 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 23:17:33.669: INFO: stderr: "I0506 23:17:33.604437 2267 log.go:172] (0xc0003c2000) (0xc000710960) Create stream\nI0506 23:17:33.604500 2267 log.go:172] (0xc0003c2000) (0xc000710960) Stream added, broadcasting: 1\nI0506 23:17:33.607387 2267 log.go:172] (0xc0003c2000) Reply frame received for 1\nI0506 23:17:33.607430 2267 log.go:172] (0xc0003c2000) (0xc0008b8aa0) Create stream\nI0506 23:17:33.607443 2267 log.go:172] (0xc0003c2000) (0xc0008b8aa0) Stream added, broadcasting: 3\nI0506 23:17:33.608280 2267 log.go:172] (0xc0003c2000) Reply frame received for 3\nI0506 23:17:33.608305 2267 log.go:172] (0xc0003c2000) (0xc000710a00) Create stream\nI0506 23:17:33.608313 2267 log.go:172] (0xc0003c2000) (0xc000710a00) Stream added, broadcasting: 5\nI0506 23:17:33.609003 2267 log.go:172] (0xc0003c2000) Reply frame received for 5\nI0506 23:17:33.662842 2267 log.go:172] (0xc0003c2000) Data frame received for 5\nI0506 23:17:33.662871 2267 log.go:172] (0xc000710a00) (5) Data frame handling\nI0506 23:17:33.662885 2267 log.go:172] (0xc000710a00) (5) Data frame sent\nI0506 23:17:33.662896 2267 log.go:172] (0xc0003c2000) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0506 23:17:33.662942 2267 log.go:172] (0xc0003c2000) Data frame received for 3\nI0506 23:17:33.662997 2267 log.go:172] (0xc0008b8aa0) (3) Data frame handling\nI0506 23:17:33.663031 2267 log.go:172] (0xc0008b8aa0) (3) Data frame sent\nI0506 23:17:33.663070 2267 log.go:172] (0xc0003c2000) Data frame received for 3\nI0506 23:17:33.663092 2267 log.go:172] (0xc0008b8aa0) (3) Data frame handling\nI0506 23:17:33.663109 2267 log.go:172] (0xc000710a00) (5) Data frame handling\nI0506 23:17:33.664261 2267 log.go:172] (0xc0003c2000) Data frame received for 1\nI0506 23:17:33.664287 2267 log.go:172] (0xc000710960) (1) Data frame handling\nI0506 23:17:33.664313 2267 log.go:172] (0xc000710960) (1) Data frame sent\nI0506 23:17:33.664335 2267 log.go:172] (0xc0003c2000) (0xc000710960) Stream removed, broadcasting: 1\nI0506 23:17:33.664365 2267 log.go:172] (0xc0003c2000) Go away 
received\nI0506 23:17:33.664715 2267 log.go:172] (0xc0003c2000) (0xc000710960) Stream removed, broadcasting: 1\nI0506 23:17:33.664741 2267 log.go:172] (0xc0003c2000) (0xc0008b8aa0) Stream removed, broadcasting: 3\nI0506 23:17:33.664756 2267 log.go:172] (0xc0003c2000) (0xc000710a00) Stream removed, broadcasting: 5\n" May 6 23:17:33.669: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 6 23:17:33.669: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 6 23:17:33.672: INFO: Found 1 stateful pods, waiting for 3 May 6 23:17:44.032: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 6 23:17:44.032: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 6 23:17:44.032: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false May 6 23:17:53.676: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 6 23:17:53.676: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 6 23:17:53.676: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 6 23:17:53.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8859 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 6 23:17:53.906: INFO: stderr: "I0506 23:17:53.845673 2287 log.go:172] (0xc0008e0000) (0xc0008d0000) Create stream\nI0506 23:17:53.845716 2287 log.go:172] (0xc0008e0000) (0xc0008d0000) Stream added, broadcasting: 1\nI0506 23:17:53.849438 2287 log.go:172] (0xc0008e0000) Reply frame received for 1\nI0506 23:17:53.849467 2287 log.go:172] (0xc0008e0000) (0xc000a94140) Create stream\nI0506 23:17:53.849476 2287 log.go:172] (0xc0008e0000) (0xc000a94140) Stream added, broadcasting: 3\nI0506 23:17:53.850119 2287 log.go:172] (0xc0008e0000) Reply frame received for 3\nI0506 23:17:53.850144 2287 log.go:172] (0xc0008e0000) (0xc000a941e0) Create stream\nI0506 23:17:53.850162 2287 log.go:172] (0xc0008e0000) (0xc000a941e0) Stream added, broadcasting: 5\nI0506 23:17:53.850782 2287 log.go:172] (0xc0008e0000) Reply frame received for 5\nI0506 23:17:53.901318 2287 log.go:172] (0xc0008e0000) Data frame received for 3\nI0506 23:17:53.901338 2287 log.go:172] (0xc000a94140) (3) Data frame handling\nI0506 23:17:53.901352 2287 log.go:172] (0xc000a94140) (3) Data frame sent\nI0506 23:17:53.901360 2287 log.go:172] (0xc0008e0000) Data frame received for 3\nI0506 23:17:53.901366 2287 log.go:172] (0xc000a94140) (3) Data frame handling\nI0506 23:17:53.901412 2287 log.go:172] (0xc0008e0000) Data frame received for 5\nI0506 23:17:53.901432 2287 log.go:172] (0xc000a941e0) (5) Data frame handling\nI0506 23:17:53.901446 2287 log.go:172] (0xc000a941e0) (5) Data frame sent\nI0506 23:17:53.901469 2287 log.go:172] (0xc0008e0000) Data frame received for 5\nI0506 23:17:53.901497 2287 log.go:172] (0xc000a941e0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0506 23:17:53.902558 2287 log.go:172] (0xc0008e0000) Data frame received for 1\nI0506 23:17:53.902575 2287 log.go:172] (0xc0008d0000) (1) Data frame handling\nI0506 23:17:53.902586 2287 log.go:172] (0xc0008d0000) (1) Data frame sent\nI0506 23:17:53.902604 2287 
log.go:172] (0xc0008e0000) (0xc0008d0000) Stream removed, broadcasting: 1\nI0506 23:17:53.902725 2287 log.go:172] (0xc0008e0000) Go away received\nI0506 23:17:53.902891 2287 log.go:172] (0xc0008e0000) (0xc0008d0000) Stream removed, broadcasting: 1\nI0506 23:17:53.902917 2287 log.go:172] (0xc0008e0000) (0xc000a94140) Stream removed, broadcasting: 3\nI0506 23:17:53.902929 2287 log.go:172] (0xc0008e0000) (0xc000a941e0) Stream removed, broadcasting: 5\n" May 6 23:17:53.906: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 6 23:17:53.906: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 6 23:17:53.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8859 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 6 23:17:54.456: INFO: stderr: "I0506 23:17:54.038877 2304 log.go:172] (0xc00093e210) (0xc0008a4000) Create stream\nI0506 23:17:54.038922 2304 log.go:172] (0xc00093e210) (0xc0008a4000) Stream added, broadcasting: 1\nI0506 23:17:54.042630 2304 log.go:172] (0xc00093e210) Reply frame received for 1\nI0506 23:17:54.042678 2304 log.go:172] (0xc00093e210) (0xc0005ec6e0) Create stream\nI0506 23:17:54.042693 2304 log.go:172] (0xc00093e210) (0xc0005ec6e0) Stream added, broadcasting: 3\nI0506 23:17:54.043388 2304 log.go:172] (0xc00093e210) Reply frame received for 3\nI0506 23:17:54.043422 2304 log.go:172] (0xc00093e210) (0xc0006e94a0) Create stream\nI0506 23:17:54.043436 2304 log.go:172] (0xc00093e210) (0xc0006e94a0) Stream added, broadcasting: 5\nI0506 23:17:54.044107 2304 log.go:172] (0xc00093e210) Reply frame received for 5\nI0506 23:17:54.097980 2304 log.go:172] (0xc00093e210) Data frame received for 5\nI0506 23:17:54.098012 2304 log.go:172] (0xc0006e94a0) (5) Data frame handling\nI0506 23:17:54.098032 2304 log.go:172] (0xc0006e94a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0506 23:17:54.442790 2304 log.go:172] (0xc00093e210) Data frame received for 3\nI0506 23:17:54.442821 2304 log.go:172] (0xc0005ec6e0) (3) Data frame handling\nI0506 23:17:54.442831 2304 log.go:172] (0xc0005ec6e0) (3) Data frame sent\nI0506 23:17:54.442848 2304 log.go:172] (0xc00093e210) Data frame received for 5\nI0506 23:17:54.442855 2304 log.go:172] (0xc0006e94a0) (5) Data frame handling\nI0506 23:17:54.443016 2304 log.go:172] (0xc00093e210) Data frame received for 3\nI0506 23:17:54.443045 2304 log.go:172] (0xc0005ec6e0) (3) Data frame handling\nI0506 23:17:54.451922 2304 log.go:172] (0xc00093e210) Data frame received for 1\nI0506 23:17:54.452050 2304 log.go:172] (0xc0008a4000) (1) Data frame handling\nI0506 23:17:54.452150 2304 log.go:172] (0xc0008a4000) (1) Data frame sent\nI0506 23:17:54.452238 2304 log.go:172] (0xc00093e210) (0xc0008a4000) Stream removed, broadcasting: 1\nI0506 23:17:54.452331 2304 log.go:172] (0xc00093e210) Go away received\nI0506 23:17:54.452576 2304 log.go:172] (0xc00093e210) (0xc0008a4000) Stream removed, broadcasting: 1\nI0506 23:17:54.452654 2304 log.go:172] (0xc00093e210) (0xc0005ec6e0) Stream removed, broadcasting: 3\nI0506 23:17:54.452685 2304 log.go:172] (0xc00093e210) (0xc0006e94a0) Stream removed, broadcasting: 5\n" May 6 23:17:54.457: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 6 23:17:54.457: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> 
'/tmp/index.html' May 6 23:17:54.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8859 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 6 23:17:55.051: INFO: stderr: "I0506 23:17:54.823488 2324 log.go:172] (0xc0006c66e0) (0xc000724280) Create stream\nI0506 23:17:54.823560 2324 log.go:172] (0xc0006c66e0) (0xc000724280) Stream added, broadcasting: 1\nI0506 23:17:54.825786 2324 log.go:172] (0xc0006c66e0) Reply frame received for 1\nI0506 23:17:54.825852 2324 log.go:172] (0xc0006c66e0) (0xc0007279a0) Create stream\nI0506 23:17:54.825869 2324 log.go:172] (0xc0006c66e0) (0xc0007279a0) Stream added, broadcasting: 3\nI0506 23:17:54.826822 2324 log.go:172] (0xc0006c66e0) Reply frame received for 3\nI0506 23:17:54.826854 2324 log.go:172] (0xc0006c66e0) (0xc0005b7ae0) Create stream\nI0506 23:17:54.826862 2324 log.go:172] (0xc0006c66e0) (0xc0005b7ae0) Stream added, broadcasting: 5\nI0506 23:17:54.827531 2324 log.go:172] (0xc0006c66e0) Reply frame received for 5\nI0506 23:17:54.919659 2324 log.go:172] (0xc0006c66e0) Data frame received for 5\nI0506 23:17:54.919678 2324 log.go:172] (0xc0005b7ae0) (5) Data frame handling\nI0506 23:17:54.919691 2324 log.go:172] (0xc0005b7ae0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0506 23:17:55.046042 2324 log.go:172] (0xc0006c66e0) Data frame received for 5\nI0506 23:17:55.046071 2324 log.go:172] (0xc0005b7ae0) (5) Data frame handling\nI0506 23:17:55.046090 2324 log.go:172] (0xc0006c66e0) Data frame received for 3\nI0506 23:17:55.046097 2324 log.go:172] (0xc0007279a0) (3) Data frame handling\nI0506 23:17:55.046106 2324 log.go:172] (0xc0007279a0) (3) Data frame sent\nI0506 23:17:55.046118 2324 log.go:172] (0xc0006c66e0) Data frame received for 3\nI0506 23:17:55.046123 2324 log.go:172] (0xc0007279a0) (3) Data frame handling\nI0506 23:17:55.047665 2324 log.go:172] (0xc0006c66e0) Data frame received for 1\nI0506 23:17:55.047729 2324 log.go:172] (0xc000724280) (1) Data frame handling\nI0506 23:17:55.047821 2324 log.go:172] (0xc000724280) (1) Data frame sent\nI0506 23:17:55.047859 2324 log.go:172] (0xc0006c66e0) (0xc000724280) Stream removed, broadcasting: 1\nI0506 23:17:55.047978 2324 log.go:172] (0xc0006c66e0) Go away received\nI0506 23:17:55.048309 2324 log.go:172] (0xc0006c66e0) (0xc000724280) Stream removed, broadcasting: 1\nI0506 23:17:55.048353 2324 log.go:172] (0xc0006c66e0) (0xc0007279a0) Stream removed, broadcasting: 3\nI0506 23:17:55.048380 2324 log.go:172] (0xc0006c66e0) (0xc0005b7ae0) Stream removed, broadcasting: 5\n" May 6 23:17:55.051: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 6 23:17:55.051: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 6 23:17:55.051: INFO: Waiting for statefulset status.replicas updated to 0 May 6 23:17:55.054: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 May 6 23:18:05.063: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 6 23:18:05.063: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 6 23:18:05.063: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 6 23:18:05.109: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999778s May 6 23:18:06.113: INFO: Verifying statefulset ss doesn't scale past 3 
for another 8.960883277s May 6 23:18:07.117: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.957062276s May 6 23:18:08.121: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.953646082s May 6 23:18:09.126: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.949046682s May 6 23:18:10.130: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.944558566s May 6 23:18:11.134: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.940518269s May 6 23:18:12.139: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.936004717s May 6 23:18:13.143: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.931393076s May 6 23:18:14.153: INFO: Verifying statefulset ss doesn't scale past 3 for another 927.174288ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-8859 May 6 23:18:15.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8859 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 23:18:15.377: INFO: stderr: "I0506 23:18:15.299881 2344 log.go:172] (0xc0003c0dc0) (0xc000986000) Create stream\nI0506 23:18:15.299928 2344 log.go:172] (0xc0003c0dc0) (0xc000986000) Stream added, broadcasting: 1\nI0506 23:18:15.302212 2344 log.go:172] (0xc0003c0dc0) Reply frame received for 1\nI0506 23:18:15.302238 2344 log.go:172] (0xc0003c0dc0) (0xc000962000) Create stream\nI0506 23:18:15.302245 2344 log.go:172] (0xc0003c0dc0) (0xc000962000) Stream added, broadcasting: 3\nI0506 23:18:15.303096 2344 log.go:172] (0xc0003c0dc0) Reply frame received for 3\nI0506 23:18:15.303125 2344 log.go:172] (0xc0003c0dc0) (0xc0009620a0) Create stream\nI0506 23:18:15.303134 2344 log.go:172] (0xc0003c0dc0) (0xc0009620a0) Stream added, broadcasting: 5\nI0506 23:18:15.303864 2344 log.go:172] (0xc0003c0dc0) Reply frame received for 5\nI0506 23:18:15.371305 2344 log.go:172] (0xc0003c0dc0) Data frame received for 5\nI0506 23:18:15.371342 2344 log.go:172] (0xc0009620a0) (5) Data frame handling\nI0506 23:18:15.371354 2344 log.go:172] (0xc0009620a0) (5) Data frame sent\nI0506 23:18:15.371364 2344 log.go:172] (0xc0003c0dc0) Data frame received for 5\nI0506 23:18:15.371373 2344 log.go:172] (0xc0009620a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0506 23:18:15.371395 2344 log.go:172] (0xc0003c0dc0) Data frame received for 3\nI0506 23:18:15.371407 2344 log.go:172] (0xc000962000) (3) Data frame handling\nI0506 23:18:15.371428 2344 log.go:172] (0xc000962000) (3) Data frame sent\nI0506 23:18:15.371455 2344 log.go:172] (0xc0003c0dc0) Data frame received for 3\nI0506 23:18:15.371470 2344 log.go:172] (0xc000962000) (3) Data frame handling\nI0506 23:18:15.372549 2344 log.go:172] (0xc0003c0dc0) Data frame received for 1\nI0506 23:18:15.372573 2344 log.go:172] (0xc000986000) (1) Data frame handling\nI0506 23:18:15.372595 2344 log.go:172] (0xc000986000) (1) Data frame sent\nI0506 23:18:15.372614 2344 log.go:172] (0xc0003c0dc0) (0xc000986000) Stream removed, broadcasting: 1\nI0506 23:18:15.372725 2344 log.go:172] (0xc0003c0dc0) Go away received\nI0506 23:18:15.372970 2344 log.go:172] (0xc0003c0dc0) (0xc000986000) Stream removed, broadcasting: 1\nI0506 23:18:15.372990 2344 log.go:172] (0xc0003c0dc0) (0xc000962000) Stream removed, broadcasting: 3\nI0506 23:18:15.373000 2344 log.go:172] (0xc0003c0dc0) (0xc0009620a0) Stream removed, broadcasting: 5\n" May 6 23:18:15.377: INFO: stdout:
"'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 6 23:18:15.377: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 6 23:18:15.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8859 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 23:18:15.573: INFO: stderr: "I0506 23:18:15.507260 2367 log.go:172] (0xc0003c0630) (0xc00096c0a0) Create stream\nI0506 23:18:15.507346 2367 log.go:172] (0xc0003c0630) (0xc00096c0a0) Stream added, broadcasting: 1\nI0506 23:18:15.509859 2367 log.go:172] (0xc0003c0630) Reply frame received for 1\nI0506 23:18:15.509908 2367 log.go:172] (0xc0003c0630) (0xc0005fdae0) Create stream\nI0506 23:18:15.509934 2367 log.go:172] (0xc0003c0630) (0xc0005fdae0) Stream added, broadcasting: 3\nI0506 23:18:15.510852 2367 log.go:172] (0xc0003c0630) Reply frame received for 3\nI0506 23:18:15.510888 2367 log.go:172] (0xc0003c0630) (0xc0004dc6e0) Create stream\nI0506 23:18:15.510900 2367 log.go:172] (0xc0003c0630) (0xc0004dc6e0) Stream added, broadcasting: 5\nI0506 23:18:15.511649 2367 log.go:172] (0xc0003c0630) Reply frame received for 5\nI0506 23:18:15.567011 2367 log.go:172] (0xc0003c0630) Data frame received for 5\nI0506 23:18:15.567045 2367 log.go:172] (0xc0004dc6e0) (5) Data frame handling\nI0506 23:18:15.567060 2367 log.go:172] (0xc0004dc6e0) (5) Data frame sent\nI0506 23:18:15.567073 2367 log.go:172] (0xc0003c0630) Data frame received for 5\nI0506 23:18:15.567082 2367 log.go:172] (0xc0004dc6e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0506 23:18:15.567120 2367 log.go:172] (0xc0003c0630) Data frame received for 3\nI0506 23:18:15.567130 2367 log.go:172] (0xc0005fdae0) (3) Data frame handling\nI0506 23:18:15.567154 2367 log.go:172] (0xc0005fdae0) (3) Data frame sent\nI0506 23:18:15.567227 2367 log.go:172] (0xc0003c0630) Data frame received for 3\nI0506 23:18:15.567243 2367 log.go:172] (0xc0005fdae0) (3) Data frame handling\nI0506 23:18:15.568453 2367 log.go:172] (0xc0003c0630) Data frame received for 1\nI0506 23:18:15.568476 2367 log.go:172] (0xc00096c0a0) (1) Data frame handling\nI0506 23:18:15.568500 2367 log.go:172] (0xc00096c0a0) (1) Data frame sent\nI0506 23:18:15.568514 2367 log.go:172] (0xc0003c0630) (0xc00096c0a0) Stream removed, broadcasting: 1\nI0506 23:18:15.568588 2367 log.go:172] (0xc0003c0630) Go away received\nI0506 23:18:15.568832 2367 log.go:172] (0xc0003c0630) (0xc00096c0a0) Stream removed, broadcasting: 1\nI0506 23:18:15.568856 2367 log.go:172] (0xc0003c0630) (0xc0005fdae0) Stream removed, broadcasting: 3\nI0506 23:18:15.568871 2367 log.go:172] (0xc0003c0630) (0xc0004dc6e0) Stream removed, broadcasting: 5\n" May 6 23:18:15.573: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 6 23:18:15.573: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 6 23:18:15.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8859 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 23:18:15.751: INFO: stderr: "I0506 23:18:15.686769 2387 log.go:172] (0xc0004fe2c0) (0xc0006fd4a0) Create stream\nI0506 23:18:15.686904 2387 log.go:172] (0xc0004fe2c0) (0xc0006fd4a0) Stream added, broadcasting: 1\nI0506 23:18:15.688736 2387 log.go:172] 
(0xc0004fe2c0) Reply frame received for 1\nI0506 23:18:15.688766 2387 log.go:172] (0xc0004fe2c0) (0xc0008d6000) Create stream\nI0506 23:18:15.688778 2387 log.go:172] (0xc0004fe2c0) (0xc0008d6000) Stream added, broadcasting: 3\nI0506 23:18:15.689817 2387 log.go:172] (0xc0004fe2c0) Reply frame received for 3\nI0506 23:18:15.689838 2387 log.go:172] (0xc0004fe2c0) (0xc0008d60a0) Create stream\nI0506 23:18:15.689844 2387 log.go:172] (0xc0004fe2c0) (0xc0008d60a0) Stream added, broadcasting: 5\nI0506 23:18:15.690620 2387 log.go:172] (0xc0004fe2c0) Reply frame received for 5\nI0506 23:18:15.745525 2387 log.go:172] (0xc0004fe2c0) Data frame received for 5\nI0506 23:18:15.745549 2387 log.go:172] (0xc0008d60a0) (5) Data frame handling\nI0506 23:18:15.745558 2387 log.go:172] (0xc0008d60a0) (5) Data frame sent\nI0506 23:18:15.745564 2387 log.go:172] (0xc0004fe2c0) Data frame received for 5\nI0506 23:18:15.745570 2387 log.go:172] (0xc0008d60a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0506 23:18:15.745593 2387 log.go:172] (0xc0004fe2c0) Data frame received for 3\nI0506 23:18:15.745617 2387 log.go:172] (0xc0008d6000) (3) Data frame handling\nI0506 23:18:15.745640 2387 log.go:172] (0xc0008d6000) (3) Data frame sent\nI0506 23:18:15.745650 2387 log.go:172] (0xc0004fe2c0) Data frame received for 3\nI0506 23:18:15.745658 2387 log.go:172] (0xc0008d6000) (3) Data frame handling\nI0506 23:18:15.746862 2387 log.go:172] (0xc0004fe2c0) Data frame received for 1\nI0506 23:18:15.746937 2387 log.go:172] (0xc0006fd4a0) (1) Data frame handling\nI0506 23:18:15.746962 2387 log.go:172] (0xc0006fd4a0) (1) Data frame sent\nI0506 23:18:15.747054 2387 log.go:172] (0xc0004fe2c0) (0xc0006fd4a0) Stream removed, broadcasting: 1\nI0506 23:18:15.747081 2387 log.go:172] (0xc0004fe2c0) Go away received\nI0506 23:18:15.747330 2387 log.go:172] (0xc0004fe2c0) (0xc0006fd4a0) Stream removed, broadcasting: 1\nI0506 23:18:15.747343 2387 log.go:172] (0xc0004fe2c0) (0xc0008d6000) Stream removed, broadcasting: 3\nI0506 23:18:15.747349 2387 log.go:172] (0xc0004fe2c0) (0xc0008d60a0) Stream removed, broadcasting: 5\n" May 6 23:18:15.751: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 6 23:18:15.751: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 6 23:18:15.751: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 6 23:18:45.764: INFO: Deleting all statefulset in ns statefulset-8859 May 6 23:18:45.767: INFO: Scaling statefulset ss to 0 May 6 23:18:45.776: INFO: Waiting for statefulset status.replicas updated to 0 May 6 23:18:45.779: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:18:45.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8859" for this suite. 
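The scaling behavior recorded above is a property of the StatefulSet's pod management policy rather than of the test harness: with OrderedReady (the default), pods come up strictly as ss-0, then ss-1, then ss-2, and a scale in either direction holds while any stateful pod is unready — which is why the test breaks readiness by moving index.html, the file the httpd readiness check serves, out of the document root. The following is a minimal client-go sketch of the same setup, assuming recent client-go method signatures; the namespace, image, and object names are illustrative, not taken from this run:

    package main

    import (
        "context"
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        autoscalingv1 "k8s.io/api/autoscaling/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a client from a kubeconfig, as the test harness does.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ctx := context.TODO()

        one := int32(1)
        labels := map[string]string{"foo": "bar", "baz": "blah"} // the selector watched above
        ss := &appsv1.StatefulSet{
            ObjectMeta: metav1.ObjectMeta{Name: "ss", Namespace: "default"},
            Spec: appsv1.StatefulSetSpec{
                Replicas:    &one,
                ServiceName: "test", // headless governing service, created separately
                // OrderedReady (the default) is what makes creation proceed
                // ss-0 -> ss-1 -> ss-2 and halt while any pod is unready.
                PodManagementPolicy: appsv1.OrderedReadyPodManagement,
                Selector:            &metav1.LabelSelector{MatchLabels: labels},
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{Containers: []corev1.Container{{
                        Name:  "webserver",
                        Image: "docker.io/library/httpd:2.4.38-alpine",
                    }}},
                },
            },
        }
        if _, err := cs.AppsV1().StatefulSets("default").Create(ctx, ss, metav1.CreateOptions{}); err != nil {
            panic(err)
        }

        // One way to scale to 3: the scale subresource. With an unready ss-0
        // the request is accepted, but the controller holds at the lower ordinal.
        scale := &autoscalingv1.Scale{
            ObjectMeta: metav1.ObjectMeta{Name: "ss", Namespace: "default"},
            Spec:       autoscalingv1.ScaleSpec{Replicas: 3},
        }
        if _, err := cs.AppsV1().StatefulSets("default").UpdateScale(ctx, "ss", scale, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
        fmt.Println("requested scale ss -> 3")
    }

Scale-down runs the same gate in reverse, which is the "scaled down in reverse order" verification above: ss-2 is removed before ss-1, and nothing proceeds while a pod is unready.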
• [SLOW TEST:116.875 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":106,"skipped":1589,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:18:45.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 6 23:18:45.894: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 6 23:18:47.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7939 create -f -' May 6 23:18:48.441: INFO: stderr: "" May 6 23:18:48.441: INFO: stdout: "e2e-test-crd-publish-openapi-3413-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 6 23:18:48.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7939 delete e2e-test-crd-publish-openapi-3413-crds test-cr' May 6 23:18:48.578: INFO: stderr: "" May 6 23:18:48.578: INFO: stdout: "e2e-test-crd-publish-openapi-3413-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" May 6 23:18:48.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7939 apply -f -' May 6 23:18:48.871: INFO: stderr: "" May 6 23:18:48.871: INFO: stdout: "e2e-test-crd-publish-openapi-3413-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 6 23:18:48.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7939 delete e2e-test-crd-publish-openapi-3413-crds test-cr' May 6 23:18:48.972: INFO: stderr: "" May 6 23:18:48.972: INFO: stdout: "e2e-test-crd-publish-openapi-3413-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema May 6 23:18:48.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3413-crds' May 6 23:18:49.235: INFO: stderr: "" May 6 23:18:49.235: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3413-crd\nVERSION: 
crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:18:52.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7939" for this suite. • [SLOW TEST:6.322 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":107,"skipped":1619,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:18:52.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:18:52.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-9661" for this suite. 
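The 406 that the Tables test expects is negotiated through HTTP content types: clients ask the apiserver to render objects as a server-side Table via the Accept header, and an aggregated backend that cannot attach table metadata must refuse with 406 Not Acceptable. A rough sketch of issuing that request with client-go follows — assuming recent client-go signatures; on the v1.17 apiserver in this run the negotiated group/version would still be meta.k8s.io/v1beta1 rather than v1, and the namespace and kubeconfig path are illustrative:

    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Ask the apiserver to render pods as a server-side Table. A backend
        // that cannot produce Table metadata should answer 406 Not Acceptable,
        // which is the status the test above asserts.
        var status int
        cs.CoreV1().RESTClient().Get().
            Namespace("default").
            Resource("pods").
            SetHeader("Accept", "application/json;as=Table;v=v1;g=meta.k8s.io").
            Do(context.TODO()).
            StatusCode(&status)
        fmt.Println("HTTP status:", status) // 200 with a Table, or 406 if unsupported
    }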
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":108,"skipped":1626,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:18:52.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-406badde-cdad-43be-a3aa-075514168016 STEP: Creating a pod to test consume configMaps May 6 23:18:52.371: INFO: Waiting up to 5m0s for pod "pod-configmaps-2a8a7fb1-57ef-4695-bba6-b4eb2ff02024" in namespace "configmap-4613" to be "success or failure" May 6 23:18:52.387: INFO: Pod "pod-configmaps-2a8a7fb1-57ef-4695-bba6-b4eb2ff02024": Phase="Pending", Reason="", readiness=false. Elapsed: 16.177161ms May 6 23:18:54.392: INFO: Pod "pod-configmaps-2a8a7fb1-57ef-4695-bba6-b4eb2ff02024": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020906113s May 6 23:18:56.415: INFO: Pod "pod-configmaps-2a8a7fb1-57ef-4695-bba6-b4eb2ff02024": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044505634s May 6 23:18:58.419: INFO: Pod "pod-configmaps-2a8a7fb1-57ef-4695-bba6-b4eb2ff02024": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.048362052s STEP: Saw pod success May 6 23:18:58.419: INFO: Pod "pod-configmaps-2a8a7fb1-57ef-4695-bba6-b4eb2ff02024" satisfied condition "success or failure" May 6 23:18:58.422: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-2a8a7fb1-57ef-4695-bba6-b4eb2ff02024 container configmap-volume-test: STEP: delete the pod May 6 23:18:58.484: INFO: Waiting for pod pod-configmaps-2a8a7fb1-57ef-4695-bba6-b4eb2ff02024 to disappear May 6 23:18:58.561: INFO: Pod pod-configmaps-2a8a7fb1-57ef-4695-bba6-b4eb2ff02024 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:18:58.561: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4613" for this suite. 
• [SLOW TEST:6.330 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":109,"skipped":1629,"failed":0} S ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:18:58.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 6 23:19:07.654: INFO: Successfully updated pod "labelsupdate93d8746b-a746-468a-a172-65eb58996542" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:19:09.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2466" for this suite. • [SLOW TEST:11.384 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":110,"skipped":1630,"failed":0} SS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:19:09.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0506 23:19:52.106787 6 metrics_grabber.go:79] Master node is not registered. 
Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 6 23:19:52.106: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:19:52.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5945" for this suite. • [SLOW TEST:42.487 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":111,"skipped":1632,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:19:52.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 6 23:19:53.230: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 6 23:19:53.808: INFO: Waiting for terminating namespaces to be deleted... 
May 6 23:19:53.810: INFO: Logging pods the kubelet thinks are on node jerma-worker before test May 6 23:19:53.817: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 6 23:19:53.817: INFO: Container kindnet-cni ready: true, restart count 0 May 6 23:19:53.817: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 6 23:19:53.817: INFO: Container kube-proxy ready: true, restart count 0 May 6 23:19:53.817: INFO: simpletest.rc-c5xqz from gc-5945 started at 2020-05-06 23:19:10 +0000 UTC (1 container status recorded) May 6 23:19:53.817: INFO: Container nginx ready: true, restart count 0 May 6 23:19:53.817: INFO: simpletest.rc-sdslp from gc-5945 started at 2020-05-06 23:19:10 +0000 UTC (1 container status recorded) May 6 23:19:53.817: INFO: Container nginx ready: true, restart count 0 May 6 23:19:53.817: INFO: simpletest.rc-nnqmc from gc-5945 started at 2020-05-06 23:19:10 +0000 UTC (1 container status recorded) May 6 23:19:53.817: INFO: Container nginx ready: true, restart count 0 May 6 23:19:53.817: INFO: simpletest.rc-jl6gp from gc-5945 started at 2020-05-06 23:19:10 +0000 UTC (1 container status recorded) May 6 23:19:53.817: INFO: Container nginx ready: true, restart count 0 May 6 23:19:53.817: INFO: simpletest.rc-vnscm from gc-5945 started at 2020-05-06 23:19:10 +0000 UTC (1 container status recorded) May 6 23:19:53.817: INFO: Container nginx ready: true, restart count 0 May 6 23:19:53.817: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test May 6 23:19:53.834: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) May 6 23:19:53.834: INFO: Container kube-hunter ready: false, restart count 0 May 6 23:19:53.834: INFO: simpletest.rc-5wfzm from gc-5945 started at 2020-05-06 23:19:10 +0000 UTC (1 container status recorded) May 6 23:19:53.834: INFO: Container nginx ready: true, restart count 0 May 6 23:19:53.834: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 6 23:19:53.834: INFO: Container kindnet-cni ready: true, restart count 0 May 6 23:19:53.834: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) May 6 23:19:53.834: INFO: Container kube-bench ready: false, restart count 0 May 6 23:19:53.834: INFO: simpletest.rc-w5d88 from gc-5945 started at 2020-05-06 23:19:10 +0000 UTC (1 container status recorded) May 6 23:19:53.834: INFO: Container nginx ready: true, restart count 0 May 6 23:19:53.834: INFO: simpletest.rc-4gz9x from gc-5945 started at 2020-05-06 23:19:10 +0000 UTC (1 container status recorded) May 6 23:19:53.834: INFO: Container nginx ready: true, restart count 0 May 6 23:19:53.834: INFO: simpletest.rc-vbflq from gc-5945 started at 2020-05-06 23:19:10 +0000 UTC (1 container status recorded) May 6 23:19:53.834: INFO: Container nginx ready: true, restart count 0 May 6 23:19:53.834: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 6 23:19:53.834: INFO: Container kube-proxy ready: true, restart count 0 May 6 23:19:53.834: INFO: simpletest.rc-ghhk6 from gc-5945 started at 2020-05-06 23:19:10 +0000 UTC (1 container status recorded) May 6 23:19:53.834: INFO: Container nginx ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol
but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-5f3a6239-5678-4315-98d2-535cd1ee6f99 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-5f3a6239-5678-4315-98d2-535cd1ee6f99 off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-5f3a6239-5678-4315-98d2-535cd1ee6f99 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:25:16.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3548" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:324.381 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":112,"skipped":1649,"failed":0} SS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:25:16.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service endpoint-test2 in namespace services-6049 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6049 to expose endpoints map[] May 6 23:25:16.962: INFO: Get endpoints failed (32.59694ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 6 23:25:17.966: INFO: successfully validated that service endpoint-test2 in namespace services-6049 exposes endpoints map[] (1.036690269s elapsed) STEP: Creating pod pod1 in namespace services-6049 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6049 to expose endpoints map[pod1:[80]] May 6 23:25:21.031: INFO: successfully validated that service 
endpoint-test2 in namespace services-6049 exposes endpoints map[pod1:[80]] (3.056016539s elapsed) STEP: Creating pod pod2 in namespace services-6049 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6049 to expose endpoints map[pod1:[80] pod2:[80]] May 6 23:25:25.220: INFO: Unexpected endpoints: found map[10a682db-deaa-4637-88e8-0885049706ad:[80]], expected map[pod1:[80] pod2:[80]] (4.185649107s elapsed, will retry) May 6 23:25:26.231: INFO: successfully validated that service endpoint-test2 in namespace services-6049 exposes endpoints map[pod1:[80] pod2:[80]] (5.196454135s elapsed) STEP: Deleting pod pod1 in namespace services-6049 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6049 to expose endpoints map[pod2:[80]] May 6 23:25:27.314: INFO: successfully validated that service endpoint-test2 in namespace services-6049 exposes endpoints map[pod2:[80]] (1.079893188s elapsed) STEP: Deleting pod pod2 in namespace services-6049 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6049 to expose endpoints map[] May 6 23:25:28.330: INFO: successfully validated that service endpoint-test2 in namespace services-6049 exposes endpoints map[] (1.011788343s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:25:28.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6049" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:11.650 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":113,"skipped":1651,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:25:28.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1790 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 6 23:25:28.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-2156' May 6 23:25:28.702: INFO: stderr: "" May 6 23:25:28.702: INFO: stdout: 
"pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created May 6 23:25:33.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-2156 -o json' May 6 23:25:33.847: INFO: stderr: "" May 6 23:25:33.847: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-06T23:25:28Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-2156\",\n \"resourceVersion\": \"14027555\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-2156/pods/e2e-test-httpd-pod\",\n \"uid\": \"000032d4-8cbf-45b7-87e9-fb8b00a69f76\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-kbcsv\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-kbcsv\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-kbcsv\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-06T23:25:28Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-06T23:25:31Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-06T23:25:31Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-06T23:25:28Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://d82c3fe43d0c44107dff6383fff549390c21180d5e906aefeef8ba0f4528e2a0\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-06T23:25:31Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.8\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.24\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.24\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-06T23:25:28Z\"\n }\n}\n" STEP: replace the image in the pod May 6 23:25:33.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-2156' May 6 
23:25:34.277: INFO: stderr: "" May 6 23:25:34.277: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1795 May 6 23:25:34.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-2156' May 6 23:25:37.893: INFO: stderr: "" May 6 23:25:37.893: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:25:37.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2156" for this suite. • [SLOW TEST:9.429 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1786 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":114,"skipped":1690,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:25:37.900: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 6 23:25:41.143: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:25:41.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8916" for this suite. 
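For readers reproducing the termination-message check above by hand, the following is a minimal sketch (pod and container names are assumed, not taken from the test): a container that exits successfully with terminationMessagePolicy set to FallbackToLogsOnError. Because the container succeeds and writes nothing to /dev/termination-log, the kubelet records no message and does not fall back to logs, so the message prints as empty, matching the assertion in the log.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-msg-demo   # assumed name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]                           # exits 0, writes nothing
    terminationMessagePolicy: FallbackToLogsOnError
EOF
# once the pod reaches Succeeded, the recorded message should be empty:
kubectl get pod termination-msg-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'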
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":115,"skipped":1761,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:25:41.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 6 23:25:41.331: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e7cd493a-2d01-4bce-89e8-9c47b52f21e6" in namespace "downward-api-3181" to be "success or failure" May 6 23:25:41.352: INFO: Pod "downwardapi-volume-e7cd493a-2d01-4bce-89e8-9c47b52f21e6": Phase="Pending", Reason="", readiness=false. Elapsed: 20.506614ms May 6 23:25:43.532: INFO: Pod "downwardapi-volume-e7cd493a-2d01-4bce-89e8-9c47b52f21e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.2008265s May 6 23:25:45.536: INFO: Pod "downwardapi-volume-e7cd493a-2d01-4bce-89e8-9c47b52f21e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.20444564s STEP: Saw pod success May 6 23:25:45.536: INFO: Pod "downwardapi-volume-e7cd493a-2d01-4bce-89e8-9c47b52f21e6" satisfied condition "success or failure" May 6 23:25:45.538: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-e7cd493a-2d01-4bce-89e8-9c47b52f21e6 container client-container: STEP: delete the pod May 6 23:25:45.592: INFO: Waiting for pod downwardapi-volume-e7cd493a-2d01-4bce-89e8-9c47b52f21e6 to disappear May 6 23:25:45.626: INFO: Pod downwardapi-volume-e7cd493a-2d01-4bce-89e8-9c47b52f21e6 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:25:45.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3181" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":116,"skipped":1771,"failed":0} S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:25:45.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-bb4ea079-28f0-4744-b046-de927e512e99 STEP: Creating a pod to test consume configMaps May 6 23:25:45.762: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e273b1b5-ed7a-4d16-946d-27d154a4a930" in namespace "projected-622" to be "success or failure" May 6 23:25:45.765: INFO: Pod "pod-projected-configmaps-e273b1b5-ed7a-4d16-946d-27d154a4a930": Phase="Pending", Reason="", readiness=false. Elapsed: 3.244583ms May 6 23:25:47.770: INFO: Pod "pod-projected-configmaps-e273b1b5-ed7a-4d16-946d-27d154a4a930": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007752528s May 6 23:25:49.774: INFO: Pod "pod-projected-configmaps-e273b1b5-ed7a-4d16-946d-27d154a4a930": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012178722s STEP: Saw pod success May 6 23:25:49.774: INFO: Pod "pod-projected-configmaps-e273b1b5-ed7a-4d16-946d-27d154a4a930" satisfied condition "success or failure" May 6 23:25:49.777: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-e273b1b5-ed7a-4d16-946d-27d154a4a930 container projected-configmap-volume-test: STEP: delete the pod May 6 23:25:49.831: INFO: Waiting for pod pod-projected-configmaps-e273b1b5-ed7a-4d16-946d-27d154a4a930 to disappear May 6 23:25:49.849: INFO: Pod pod-projected-configmaps-e273b1b5-ed7a-4d16-946d-27d154a4a930 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:25:49.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-622" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":117,"skipped":1772,"failed":0} S ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:25:49.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Starting the proxy May 6 23:25:49.960: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix291938683/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:25:50.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-964" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":118,"skipped":1773,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:25:50.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: executing a command with run --rm and attach with stdin May 6 23:25:50.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3577 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' May 6 23:25:53.026: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0506 23:25:52.920488 2619 log.go:172] (0xc000b0b1e0) (0xc000ad8320) Create stream\nI0506 23:25:52.920541 2619 log.go:172] (0xc000b0b1e0) (0xc000ad8320) Stream added, broadcasting: 1\nI0506 23:25:52.922962 2619 log.go:172] (0xc000b0b1e0) Reply frame received for 1\nI0506 23:25:52.923018 2619 log.go:172] (0xc000b0b1e0) (0xc000ad83c0) Create stream\nI0506 23:25:52.923033 2619 log.go:172] (0xc000b0b1e0) (0xc000ad83c0) Stream added, broadcasting: 3\nI0506 23:25:52.923925 2619 log.go:172] (0xc000b0b1e0) Reply frame received for 3\nI0506 23:25:52.923963 2619 log.go:172] (0xc000b0b1e0) (0xc000681b80) Create stream\nI0506 23:25:52.923973 2619 log.go:172] (0xc000b0b1e0) (0xc000681b80) Stream added, broadcasting: 5\nI0506 23:25:52.924883 2619 log.go:172] (0xc000b0b1e0) Reply frame received for 5\nI0506 23:25:52.924930 2619 log.go:172] (0xc000b0b1e0) (0xc00079e000) Create stream\nI0506 23:25:52.924946 2619 log.go:172] (0xc000b0b1e0) (0xc00079e000) Stream added, broadcasting: 7\nI0506 23:25:52.925982 2619 log.go:172] (0xc000b0b1e0) Reply frame received for 7\nI0506 23:25:52.926131 2619 log.go:172] (0xc000ad83c0) (3) Writing data frame\nI0506 23:25:52.926211 2619 log.go:172] (0xc000ad83c0) (3) Writing data frame\nI0506 23:25:52.927033 2619 log.go:172] (0xc000b0b1e0) Data frame received for 5\nI0506 23:25:52.927055 2619 log.go:172] (0xc000681b80) (5) Data frame handling\nI0506 23:25:52.927076 2619 log.go:172] (0xc000681b80) (5) Data frame sent\nI0506 23:25:52.927655 2619 log.go:172] (0xc000b0b1e0) Data frame received for 5\nI0506 23:25:52.927681 2619 log.go:172] (0xc000681b80) (5) Data frame handling\nI0506 23:25:52.927706 2619 log.go:172] (0xc000681b80) (5) Data frame sent\nI0506 23:25:52.967889 2619 log.go:172] (0xc000b0b1e0) Data frame received for 5\nI0506 23:25:52.967945 2619 log.go:172] (0xc000681b80) (5) Data frame handling\nI0506 23:25:52.967971 2619 log.go:172] (0xc000b0b1e0) Data frame received for 7\nI0506 23:25:52.967987 2619 log.go:172] (0xc00079e000) (7) Data frame handling\nI0506 23:25:52.968850 2619 log.go:172] (0xc000b0b1e0) Data frame received for 1\nI0506 23:25:52.968875 2619 log.go:172] (0xc000b0b1e0) (0xc000ad83c0) Stream removed, broadcasting: 3\nI0506 23:25:52.968896 2619 log.go:172] (0xc000ad8320) (1) Data frame handling\nI0506 23:25:52.968906 2619 log.go:172] (0xc000ad8320) (1) Data frame sent\nI0506 23:25:52.968915 2619 log.go:172] (0xc000b0b1e0) (0xc000ad8320) Stream removed, broadcasting: 1\nI0506 23:25:52.968931 2619 log.go:172] (0xc000b0b1e0) Go away received\nI0506 23:25:52.969611 2619 log.go:172] (0xc000b0b1e0) (0xc000ad8320) Stream removed, broadcasting: 1\nI0506 23:25:52.969634 2619 log.go:172] (0xc000b0b1e0) (0xc000ad83c0) Stream removed, broadcasting: 3\nI0506 23:25:52.969649 2619 log.go:172] (0xc000b0b1e0) (0xc000681b80) Stream removed, broadcasting: 5\nI0506 23:25:52.969685 2619 log.go:172] (0xc000b0b1e0) (0xc00079e000) Stream removed, broadcasting: 7\n" May 6 23:25:53.026: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:25:55.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3577" for this suite. 
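The invocation above can be reproduced by piping data on stdin; this is roughly the same command the framework runs (the log itself warns that --generator=job/v1 is deprecated on this kubectl version):

echo 'abcd1234' | kubectl --namespace=default run e2e-test-rm-busybox-job \
  --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 \
  --restart=OnFailure --attach=true --stdin -- sh -c 'cat && echo stdin closed'
# --rm deletes the job once the attached session ends:
kubectl get jobs e2e-test-rm-busybox-job   # expected: NotFound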
• [SLOW TEST:5.107 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1837 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":119,"skipped":1823,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:25:55.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 6 23:25:55.272: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:25:55.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7941" for this suite. 
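The create/delete cycle above never shows the object itself; a minimal CustomResourceDefinition along these lines would exercise the same path (the group and kind are invented for the example):

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com       # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
EOF
kubectl delete customresourcedefinition foos.example.com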
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":120,"skipped":1843,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:25:55.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 6 23:25:55.936: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f119909c-0b1e-4c1e-beb6-cbdaf710dbc7" in namespace "projected-223" to be "success or failure" May 6 23:25:55.939: INFO: Pod "downwardapi-volume-f119909c-0b1e-4c1e-beb6-cbdaf710dbc7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.177069ms May 6 23:25:57.943: INFO: Pod "downwardapi-volume-f119909c-0b1e-4c1e-beb6-cbdaf710dbc7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007181791s May 6 23:25:59.947: INFO: Pod "downwardapi-volume-f119909c-0b1e-4c1e-beb6-cbdaf710dbc7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011618223s STEP: Saw pod success May 6 23:25:59.947: INFO: Pod "downwardapi-volume-f119909c-0b1e-4c1e-beb6-cbdaf710dbc7" satisfied condition "success or failure" May 6 23:25:59.950: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-f119909c-0b1e-4c1e-beb6-cbdaf710dbc7 container client-container: STEP: delete the pod May 6 23:26:00.118: INFO: Waiting for pod downwardapi-volume-f119909c-0b1e-4c1e-beb6-cbdaf710dbc7 to disappear May 6 23:26:00.155: INFO: Pod downwardapi-volume-f119909c-0b1e-4c1e-beb6-cbdaf710dbc7 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:26:00.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-223" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":121,"skipped":1845,"failed":0} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:26:00.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:26:17.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6174" for this suite. • [SLOW TEST:17.202 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":278,"completed":122,"skipped":1849,"failed":0} [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:26:17.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 6 23:26:22.695: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:26:23.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-7279" for this suite. 
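What "adopted" and "released" mean above is visible in metadata.ownerReferences; a sketch of the same sequence, with all names assumed:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption-release
  labels:
    name: pod-adoption-release
spec:
  containers:
  - name: httpd
    image: docker.io/library/httpd:2.4.38-alpine
EOF
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release   # assumed name
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: httpd
        image: docker.io/library/httpd:2.4.38-alpine
EOF
# adoption: the controller sets itself as owner of the matching orphan pod
kubectl get pod pod-adoption-release -o jsonpath='{.metadata.ownerReferences[0].name}'
# release: changing the label clears the owner reference, and the
# ReplicaSet creates a replacement pod to get back to replicas: 1
kubectl label pod pod-adoption-release name=released --overwrite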
• [SLOW TEST:6.435 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":123,"skipped":1849,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:26:23.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 6 23:26:25.082: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 6 23:26:27.104: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724404385, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724404385, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724404385, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724404384, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 23:26:30.171: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:26:30.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4333" for this suite. STEP: Destroying namespace "webhook-4333-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.835 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":124,"skipped":1865,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:26:30.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:26:44.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1328" for this suite. • [SLOW TEST:13.703 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":278,"completed":125,"skipped":1884,"failed":0} S ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:26:44.357: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods May 6 23:26:51.024: INFO: Successfully updated pod "adopt-release-6dcg8" STEP: Checking that the Job readopts the Pod May 6 23:26:51.024: INFO: Waiting up to 15m0s for pod "adopt-release-6dcg8" in namespace "job-3520" to be "adopted" May 6 23:26:51.086: INFO: Pod "adopt-release-6dcg8": Phase="Running", Reason="", readiness=true. Elapsed: 61.909227ms May 6 23:26:53.090: INFO: Pod "adopt-release-6dcg8": Phase="Running", Reason="", readiness=true. Elapsed: 2.065823811s May 6 23:26:53.090: INFO: Pod "adopt-release-6dcg8" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod May 6 23:26:53.599: INFO: Successfully updated pod "adopt-release-6dcg8" STEP: Checking that the Job releases the Pod May 6 23:26:53.599: INFO: Waiting up to 15m0s for pod "adopt-release-6dcg8" in namespace "job-3520" to be "released" May 6 23:26:53.618: INFO: Pod "adopt-release-6dcg8": Phase="Running", Reason="", readiness=true. Elapsed: 19.603371ms May 6 23:26:55.655: INFO: Pod "adopt-release-6dcg8": Phase="Running", Reason="", readiness=true. Elapsed: 2.056385657s May 6 23:26:55.655: INFO: Pod "adopt-release-6dcg8" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:26:55.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-3520" for this suite. 
• [SLOW TEST:11.315 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":126,"skipped":1885,"failed":0} SSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:26:55.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service multi-endpoint-test in namespace services-7726 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7726 to expose endpoints map[] May 6 23:26:56.059: INFO: Get endpoints failed (14.339617ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found May 6 23:26:57.068: INFO: successfully validated that service multi-endpoint-test in namespace services-7726 exposes endpoints map[] (1.023692667s elapsed) STEP: Creating pod pod1 in namespace services-7726 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7726 to expose endpoints map[pod1:[100]] May 6 23:27:01.170: INFO: successfully validated that service multi-endpoint-test in namespace services-7726 exposes endpoints map[pod1:[100]] (4.094292526s elapsed) STEP: Creating pod pod2 in namespace services-7726 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7726 to expose endpoints map[pod1:[100] pod2:[101]] May 6 23:27:04.330: INFO: successfully validated that service multi-endpoint-test in namespace services-7726 exposes endpoints map[pod1:[100] pod2:[101]] (3.156566616s elapsed) STEP: Deleting pod pod1 in namespace services-7726 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7726 to expose endpoints map[pod2:[101]] May 6 23:27:05.366: INFO: successfully validated that service multi-endpoint-test in namespace services-7726 exposes endpoints map[pod2:[101]] (1.031526835s elapsed) STEP: Deleting pod pod2 in namespace services-7726 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7726 to expose endpoints map[] May 6 23:27:06.427: INFO: successfully validated that service multi-endpoint-test in namespace services-7726 exposes endpoints map[] (1.057930154s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:27:06.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7726" for this suite. 
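For reference, the service shape behind the endpoint maps above pairs two named ports with the pods' container ports (the selector and port numbers loosely mirror the test and are otherwise assumed):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
spec:
  selector:
    app: multiport-demo      # assumed selector
  ports:
  - name: portname1
    port: 80
    targetPort: 100
  - name: portname2
    port: 81
    targetPort: 101
EOF
# as pods carrying the selector label come and go, the endpoints object
# tracks which pod IPs back each named port, as in the maps logged above:
kubectl get endpoints multi-endpoint-test -o wide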
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:10.869 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":127,"skipped":1890,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:27:06.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override all May 6 23:27:06.586: INFO: Waiting up to 5m0s for pod "client-containers-cd9dcabb-d015-45c2-b244-765fd3642c5f" in namespace "containers-625" to be "success or failure" May 6 23:27:06.594: INFO: Pod "client-containers-cd9dcabb-d015-45c2-b244-765fd3642c5f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.203679ms May 6 23:27:08.658: INFO: Pod "client-containers-cd9dcabb-d015-45c2-b244-765fd3642c5f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072529851s May 6 23:27:10.686: INFO: Pod "client-containers-cd9dcabb-d015-45c2-b244-765fd3642c5f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100167148s May 6 23:27:12.690: INFO: Pod "client-containers-cd9dcabb-d015-45c2-b244-765fd3642c5f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.104247309s STEP: Saw pod success May 6 23:27:12.690: INFO: Pod "client-containers-cd9dcabb-d015-45c2-b244-765fd3642c5f" satisfied condition "success or failure" May 6 23:27:12.693: INFO: Trying to get logs from node jerma-worker pod client-containers-cd9dcabb-d015-45c2-b244-765fd3642c5f container test-container: STEP: delete the pod May 6 23:27:12.752: INFO: Waiting for pod client-containers-cd9dcabb-d015-45c2-b244-765fd3642c5f to disappear May 6 23:27:12.784: INFO: Pod client-containers-cd9dcabb-d015-45c2-b244-765fd3642c5f no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:27:12.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-625" for this suite. 
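The "override all" pod above sets both command and args; a minimal equivalent (names and output assumed):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: command-override-demo   # assumed name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["/bin/echo"]            # replaces the image ENTRYPOINT
    args: ["override", "arguments"]   # replaces the image CMD
EOF
kubectl logs command-override-demo    # expected: override arguments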
• [SLOW TEST:6.322 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":128,"skipped":1912,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:27:12.865: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-2w7m STEP: Creating a pod to test atomic-volume-subpath May 6 23:27:13.296: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-2w7m" in namespace "subpath-2799" to be "success or failure" May 6 23:27:13.323: INFO: Pod "pod-subpath-test-configmap-2w7m": Phase="Pending", Reason="", readiness=false. Elapsed: 26.349308ms May 6 23:27:15.422: INFO: Pod "pod-subpath-test-configmap-2w7m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12594417s May 6 23:27:17.426: INFO: Pod "pod-subpath-test-configmap-2w7m": Phase="Running", Reason="", readiness=true. Elapsed: 4.129807415s May 6 23:27:19.430: INFO: Pod "pod-subpath-test-configmap-2w7m": Phase="Running", Reason="", readiness=true. Elapsed: 6.133468823s May 6 23:27:21.434: INFO: Pod "pod-subpath-test-configmap-2w7m": Phase="Running", Reason="", readiness=true. Elapsed: 8.137941527s May 6 23:27:23.512: INFO: Pod "pod-subpath-test-configmap-2w7m": Phase="Running", Reason="", readiness=true. Elapsed: 10.215413538s May 6 23:27:25.517: INFO: Pod "pod-subpath-test-configmap-2w7m": Phase="Running", Reason="", readiness=true. Elapsed: 12.220483452s May 6 23:27:27.520: INFO: Pod "pod-subpath-test-configmap-2w7m": Phase="Running", Reason="", readiness=true. Elapsed: 14.22378278s May 6 23:27:29.525: INFO: Pod "pod-subpath-test-configmap-2w7m": Phase="Running", Reason="", readiness=true. Elapsed: 16.228045905s May 6 23:27:31.542: INFO: Pod "pod-subpath-test-configmap-2w7m": Phase="Running", Reason="", readiness=true. Elapsed: 18.245671078s May 6 23:27:33.547: INFO: Pod "pod-subpath-test-configmap-2w7m": Phase="Running", Reason="", readiness=true. Elapsed: 20.250125819s May 6 23:27:35.550: INFO: Pod "pod-subpath-test-configmap-2w7m": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.253907399s May 6 23:27:37.554: INFO: Pod "pod-subpath-test-configmap-2w7m": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.257872897s STEP: Saw pod success May 6 23:27:37.554: INFO: Pod "pod-subpath-test-configmap-2w7m" satisfied condition "success or failure" May 6 23:27:37.557: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-2w7m container test-container-subpath-configmap-2w7m: STEP: delete the pod May 6 23:27:37.646: INFO: Waiting for pod pod-subpath-test-configmap-2w7m to disappear May 6 23:27:37.655: INFO: Pod pod-subpath-test-configmap-2w7m no longer exists STEP: Deleting pod pod-subpath-test-configmap-2w7m May 6 23:27:37.655: INFO: Deleting pod "pod-subpath-test-configmap-2w7m" in namespace "subpath-2799" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:27:37.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2799" for this suite. • [SLOW TEST:24.799 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":129,"skipped":1995,"failed":0} [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:27:37.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 6 23:27:37.717: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 6 23:27:37.763: INFO: Pod name sample-pod: Found 0 pods out of 1 May 6 23:27:42.766: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 6 23:27:42.767: INFO: Creating deployment "test-rolling-update-deployment" May 6 23:27:42.769: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 6 23:27:42.778: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 6 23:27:44.786: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 6 23:27:44.789: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724404462, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724404462, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724404462, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724404462, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 23:27:46.850: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 6 23:27:46.869: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-9199 /apis/apps/v1/namespaces/deployment-9199/deployments/test-rolling-update-deployment 7ceb38f1-f0c0-4345-bd6d-0706a81e3a53 14028513 1 2020-05-06 23:27:42 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003db9bc8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-06 23:27:42 +0000 UTC,LastTransitionTime:2020-05-06 23:27:42 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-05-06 23:27:46 +0000 UTC,LastTransitionTime:2020-05-06 23:27:42 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 6 23:27:46.872: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-9199 /apis/apps/v1/namespaces/deployment-9199/replicasets/test-rolling-update-deployment-67cf4f6444
de76e899-261b-468c-8c22-a9e86aa10271 14028502 1 2020-05-06 23:27:42 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 7ceb38f1-f0c0-4345-bd6d-0706a81e3a53 0xc003ce08f7 0xc003ce08f8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003ce0968 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 6 23:27:46.872: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 6 23:27:46.872: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-9199 /apis/apps/v1/namespaces/deployment-9199/replicasets/test-rolling-update-controller eab1030c-b4ac-49d6-8e06-f3e28f0e15e9 14028511 2 2020-05-06 23:27:37 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 7ceb38f1-f0c0-4345-bd6d-0706a81e3a53 0xc003ce080f 0xc003ce0820}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003ce0888 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 6 23:27:46.875: INFO: Pod "test-rolling-update-deployment-67cf4f6444-65m2p" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-65m2p test-rolling-update-deployment-67cf4f6444- deployment-9199 /api/v1/namespaces/deployment-9199/pods/test-rolling-update-deployment-67cf4f6444-65m2p ef318bff-8328-4749-ab08-34bf582e8664 14028501 0 2020-05-06 23:27:42 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 
de76e899-261b-468c-8c22-a9e86aa10271 0xc003ce0db7 0xc003ce0db8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5z7kb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5z7kb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5z7kb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:27:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:27:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:27:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:27:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.33,StartTime:2020-05-06 23:27:42 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-06 23:27:45 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://1755977c77db5ca9f99e85eb2e4d35e493430df13851051a78dd085cd1cda929,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.33,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:27:46.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9199" for this suite. • [SLOW TEST:9.217 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":130,"skipped":1995,"failed":0} SSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:27:46.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 6 23:27:47.117: INFO: Waiting up to 5m0s for pod "downward-api-44a5c17f-84aa-4954-9b8e-6c2ffde08759" in namespace "downward-api-1230" to be "success or failure" May 6 23:27:47.286: INFO: Pod "downward-api-44a5c17f-84aa-4954-9b8e-6c2ffde08759": Phase="Pending", Reason="", readiness=false. Elapsed: 169.17702ms May 6 23:27:49.291: INFO: Pod "downward-api-44a5c17f-84aa-4954-9b8e-6c2ffde08759": Phase="Pending", Reason="", readiness=false. Elapsed: 2.173802793s May 6 23:27:51.295: INFO: Pod "downward-api-44a5c17f-84aa-4954-9b8e-6c2ffde08759": Phase="Pending", Reason="", readiness=false. Elapsed: 4.178033152s May 6 23:27:53.333: INFO: Pod "downward-api-44a5c17f-84aa-4954-9b8e-6c2ffde08759": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.215778701s STEP: Saw pod success May 6 23:27:53.333: INFO: Pod "downward-api-44a5c17f-84aa-4954-9b8e-6c2ffde08759" satisfied condition "success or failure" May 6 23:27:53.392: INFO: Trying to get logs from node jerma-worker2 pod downward-api-44a5c17f-84aa-4954-9b8e-6c2ffde08759 container dapi-container: STEP: delete the pod May 6 23:27:54.040: INFO: Waiting for pod downward-api-44a5c17f-84aa-4954-9b8e-6c2ffde08759 to disappear May 6 23:27:54.299: INFO: Pod downward-api-44a5c17f-84aa-4954-9b8e-6c2ffde08759 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:27:54.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1230" for this suite. • [SLOW TEST:7.613 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":131,"skipped":1999,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:27:54.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 6 23:27:55.961: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 6 23:27:57.970: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724404476, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724404476, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724404476, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724404475, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 23:27:59.975: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724404476, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724404476, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724404476, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724404475, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 23:28:03.094: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:28:03.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2070" for this suite. STEP: Destroying namespace "webhook-2070-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.898 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":132,"skipped":2062,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:28:03.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-4784 STEP: creating a selector STEP: Creating the service pods in kubernetes May 6 23:28:03.608: INFO: Waiting up to 10m0s for all (but 0) nodes 
to be schedulable STEP: Creating test pods May 6 23:28:26.156: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.36:8080/dial?request=hostname&protocol=http&host=10.244.1.131&port=8080&tries=1'] Namespace:pod-network-test-4784 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 23:28:26.156: INFO: >>> kubeConfig: /root/.kube/config I0506 23:28:26.190016 6 log.go:172] (0xc000ea4370) (0xc001a5fd60) Create stream I0506 23:28:26.190046 6 log.go:172] (0xc000ea4370) (0xc001a5fd60) Stream added, broadcasting: 1 I0506 23:28:26.192219 6 log.go:172] (0xc000ea4370) Reply frame received for 1 I0506 23:28:26.192252 6 log.go:172] (0xc000ea4370) (0xc0028a60a0) Create stream I0506 23:28:26.192264 6 log.go:172] (0xc000ea4370) (0xc0028a60a0) Stream added, broadcasting: 3 I0506 23:28:26.193658 6 log.go:172] (0xc000ea4370) Reply frame received for 3 I0506 23:28:26.193731 6 log.go:172] (0xc000ea4370) (0xc001a5fe00) Create stream I0506 23:28:26.193749 6 log.go:172] (0xc000ea4370) (0xc001a5fe00) Stream added, broadcasting: 5 I0506 23:28:26.194940 6 log.go:172] (0xc000ea4370) Reply frame received for 5 I0506 23:28:26.260701 6 log.go:172] (0xc000ea4370) Data frame received for 3 I0506 23:28:26.260728 6 log.go:172] (0xc0028a60a0) (3) Data frame handling I0506 23:28:26.260737 6 log.go:172] (0xc0028a60a0) (3) Data frame sent I0506 23:28:26.261698 6 log.go:172] (0xc000ea4370) Data frame received for 3 I0506 23:28:26.261741 6 log.go:172] (0xc0028a60a0) (3) Data frame handling I0506 23:28:26.261867 6 log.go:172] (0xc000ea4370) Data frame received for 5 I0506 23:28:26.261898 6 log.go:172] (0xc001a5fe00) (5) Data frame handling I0506 23:28:26.263023 6 log.go:172] (0xc000ea4370) Data frame received for 1 I0506 23:28:26.263039 6 log.go:172] (0xc001a5fd60) (1) Data frame handling I0506 23:28:26.263047 6 log.go:172] (0xc001a5fd60) (1) Data frame sent I0506 23:28:26.263057 6 log.go:172] (0xc000ea4370) (0xc001a5fd60) Stream removed, broadcasting: 1 I0506 23:28:26.263071 6 log.go:172] (0xc000ea4370) Go away received I0506 23:28:26.263183 6 log.go:172] (0xc000ea4370) (0xc001a5fd60) Stream removed, broadcasting: 1 I0506 23:28:26.263195 6 log.go:172] (0xc000ea4370) (0xc0028a60a0) Stream removed, broadcasting: 3 I0506 23:28:26.263201 6 log.go:172] (0xc000ea4370) (0xc001a5fe00) Stream removed, broadcasting: 5 May 6 23:28:26.263: INFO: Waiting for responses: map[] May 6 23:28:26.266: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.36:8080/dial?request=hostname&protocol=http&host=10.244.2.35&port=8080&tries=1'] Namespace:pod-network-test-4784 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 23:28:26.266: INFO: >>> kubeConfig: /root/.kube/config I0506 23:28:26.295793 6 log.go:172] (0xc000ea4a50) (0xc0028741e0) Create stream I0506 23:28:26.295822 6 log.go:172] (0xc000ea4a50) (0xc0028741e0) Stream added, broadcasting: 1 I0506 23:28:26.297990 6 log.go:172] (0xc000ea4a50) Reply frame received for 1 I0506 23:28:26.298017 6 log.go:172] (0xc000ea4a50) (0xc002874280) Create stream I0506 23:28:26.298035 6 log.go:172] (0xc000ea4a50) (0xc002874280) Stream added, broadcasting: 3 I0506 23:28:26.299047 6 log.go:172] (0xc000ea4a50) Reply frame received for 3 I0506 23:28:26.299071 6 log.go:172] (0xc000ea4a50) (0xc0028a6280) Create stream I0506 23:28:26.299079 6 log.go:172] (0xc000ea4a50) (0xc0028a6280) Stream added, broadcasting: 5 I0506 
23:28:26.299973 6 log.go:172] (0xc000ea4a50) Reply frame received for 5 I0506 23:28:26.371627 6 log.go:172] (0xc000ea4a50) Data frame received for 3 I0506 23:28:26.371659 6 log.go:172] (0xc002874280) (3) Data frame handling I0506 23:28:26.371696 6 log.go:172] (0xc002874280) (3) Data frame sent I0506 23:28:26.372475 6 log.go:172] (0xc000ea4a50) Data frame received for 5 I0506 23:28:26.372512 6 log.go:172] (0xc0028a6280) (5) Data frame handling I0506 23:28:26.372684 6 log.go:172] (0xc000ea4a50) Data frame received for 3 I0506 23:28:26.372711 6 log.go:172] (0xc002874280) (3) Data frame handling I0506 23:28:26.374517 6 log.go:172] (0xc000ea4a50) Data frame received for 1 I0506 23:28:26.374550 6 log.go:172] (0xc0028741e0) (1) Data frame handling I0506 23:28:26.374592 6 log.go:172] (0xc0028741e0) (1) Data frame sent I0506 23:28:26.374631 6 log.go:172] (0xc000ea4a50) (0xc0028741e0) Stream removed, broadcasting: 1 I0506 23:28:26.374672 6 log.go:172] (0xc000ea4a50) Go away received I0506 23:28:26.374743 6 log.go:172] (0xc000ea4a50) (0xc0028741e0) Stream removed, broadcasting: 1 I0506 23:28:26.374773 6 log.go:172] (0xc000ea4a50) (0xc002874280) Stream removed, broadcasting: 3 I0506 23:28:26.374785 6 log.go:172] (0xc000ea4a50) (0xc0028a6280) Stream removed, broadcasting: 5 May 6 23:28:26.374: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:28:26.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4784" for this suite. • [SLOW TEST:22.988 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":133,"skipped":2079,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:28:26.383: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 6 23:28:26.658: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota May 6 23:28:28.852: INFO: Updating 
replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:28:30.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8728" for this suite. •{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":134,"skipped":2117,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:28:30.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:28:38.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2267" for this suite. 
• [SLOW TEST:8.355 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":135,"skipped":2129,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:28:38.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 6 23:28:39.006: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2570' May 6 23:28:44.028: INFO: stderr: "" May 6 23:28:44.028: INFO: stdout: "replicationcontroller/agnhost-master created\n" May 6 23:28:44.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2570' May 6 23:28:44.451: INFO: stderr: "" May 6 23:28:44.451: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 6 23:28:45.747: INFO: Selector matched 1 pods for map[app:agnhost] May 6 23:28:45.747: INFO: Found 0 / 1 May 6 23:28:46.651: INFO: Selector matched 1 pods for map[app:agnhost] May 6 23:28:46.651: INFO: Found 0 / 1 May 6 23:28:47.455: INFO: Selector matched 1 pods for map[app:agnhost] May 6 23:28:47.455: INFO: Found 0 / 1 May 6 23:28:48.478: INFO: Selector matched 1 pods for map[app:agnhost] May 6 23:28:48.478: INFO: Found 1 / 1 May 6 23:28:48.479: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 6 23:28:48.482: INFO: Selector matched 1 pods for map[app:agnhost] May 6 23:28:48.482: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
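The assertions that follow shell out to kubectl describe for every object involved; the same inspection can be repeated by hand with commands of this shape, mirroring the calls recorded below. The pod suffix (m4wbt) and the namespace are what this particular run generated and will differ each time.

# Describe the pod, its controller, the service, a node, and the namespace:
kubectl describe pod agnhost-master-m4wbt --namespace=kubectl-2570
kubectl describe rc agnhost-master --namespace=kubectl-2570
kubectl describe service agnhost-master --namespace=kubectl-2570
kubectl describe node jerma-control-plane
kubectl describe namespace kubectl-2570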
May 6 23:28:48.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-m4wbt --namespace=kubectl-2570' May 6 23:28:48.984: INFO: stderr: "" May 6 23:28:48.985: INFO: stdout: "Name: agnhost-master-m4wbt\nNamespace: kubectl-2570\nPriority: 0\nNode: jerma-worker2/172.17.0.8\nStart Time: Wed, 06 May 2020 23:28:44 +0000\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nStatus: Running\nIP: 10.244.2.38\nIPs:\n IP: 10.244.2.38\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://3aa430f41adb8c8cdcb7b79261989e4dc57e0cf97faac9aba5f40d6fa53bc19a\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Wed, 06 May 2020 23:28:48 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-cf6kf (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-cf6kf:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-cf6kf\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-2570/agnhost-master-m4wbt to jerma-worker2\n Normal Pulled 2s kubelet, jerma-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 0s kubelet, jerma-worker2 Created container agnhost-master\n Normal Started 0s kubelet, jerma-worker2 Started container agnhost-master\n" May 6 23:28:48.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-2570' May 6 23:28:49.122: INFO: stderr: "" May 6 23:28:49.122: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-2570\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 5s replication-controller Created pod: agnhost-master-m4wbt\n" May 6 23:28:49.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-2570' May 6 23:28:49.221: INFO: stderr: "" May 6 23:28:49.221: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-2570\nLabels: app=agnhost\n role=master\nAnnotations: <none>\nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.111.167.187\nPort: <unset> 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.38:6379\nSession Affinity: None\nEvents: <none>\n" May 6 23:28:49.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane' May 6 23:28:49.337: INFO: stderr: "" May 6 23:28:49.337: INFO: stdout: "Name: jerma-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n
beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:25:55 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: jerma-control-plane\n AcquireTime: <unset>\n RenewTime: Wed, 06 May 2020 23:28:46 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Wed, 06 May 2020 23:25:42 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Wed, 06 May 2020 23:25:42 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Wed, 06 May 2020 23:25:42 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Wed, 06 May 2020 23:25:42 +0000 Sun, 15 Mar 2020 18:26:27 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.9\n Hostname: jerma-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3bcfb16fe77247d3af07bed975350d5c\n System UUID: 947a2db5-5527-4203-8af5-13d97ffe8a80\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2-31-gaa877d78\n Kubelet Version: v1.17.2\n Kube-Proxy Version: v1.17.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-rll5s 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 52d\n kube-system coredns-6955765f44-svxk5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 52d\n kube-system etcd-jerma-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 52d\n kube-system kindnet-bjddj 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 52d\n kube-system kube-apiserver-jerma-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 52d\n kube-system kube-controller-manager-jerma-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 52d\n kube-system kube-proxy-mm9zd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 52d\n kube-system kube-scheduler-jerma-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 52d\n local-path-storage local-path-provisioner-85445b74d4-7mg5w 0 (0%) 0 (0%) 0 (0%) 0 (0%) 52d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: <none>\n" May 6 23:28:49.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-2570' May 6 23:28:49.444: INFO: stderr: "" May 6 23:28:49.444: INFO: stdout: "Name: kubectl-2570\nLabels: e2e-framework=kubectl\n e2e-run=dda4c29b-ce4d-4fdd-b877-1cb0da7a3874\nAnnotations: <none>\nStatus: Active\n\nNo resource quota.\n\nNo LimitRange
resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:28:49.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2570" for this suite. • [SLOW TEST:10.941 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1047 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":136,"skipped":2131,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:28:49.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-3241 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-3241 I0506 23:28:49.746985 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-3241, replica count: 2 I0506 23:28:52.797498 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 23:28:55.797749 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 6 23:28:55.797: INFO: Creating new exec pod May 6 23:29:00.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3241 execpodlcsl8 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 6 23:29:01.098: INFO: stderr: "I0506 23:29:00.998033 2821 log.go:172] (0xc0005ee000) (0xc0005161e0) Create stream\nI0506 23:29:00.998121 2821 log.go:172] (0xc0005ee000) (0xc0005161e0) Stream added, broadcasting: 1\nI0506 23:29:00.999925 2821 log.go:172] (0xc0005ee000) Reply frame received for 1\nI0506 23:29:00.999971 2821 log.go:172] (0xc0005ee000) (0xc000662820) Create stream\nI0506 23:29:00.999983 2821 log.go:172] (0xc0005ee000) (0xc000662820) Stream added, broadcasting: 3\nI0506 23:29:01.001348 2821 log.go:172] (0xc0005ee000) Reply frame received for 3\nI0506 23:29:01.001516 2821 log.go:172] (0xc0005ee000) (0xc0003e35e0) Create stream\nI0506 
23:29:01.001539 2821 log.go:172] (0xc0005ee000) (0xc0003e35e0) Stream added, broadcasting: 5\nI0506 23:29:01.002581 2821 log.go:172] (0xc0005ee000) Reply frame received for 5\nI0506 23:29:01.089919 2821 log.go:172] (0xc0005ee000) Data frame received for 5\nI0506 23:29:01.089965 2821 log.go:172] (0xc0003e35e0) (5) Data frame handling\nI0506 23:29:01.090057 2821 log.go:172] (0xc0003e35e0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0506 23:29:01.090315 2821 log.go:172] (0xc0005ee000) Data frame received for 5\nI0506 23:29:01.090345 2821 log.go:172] (0xc0003e35e0) (5) Data frame handling\nI0506 23:29:01.090381 2821 log.go:172] (0xc0005ee000) Data frame received for 3\nI0506 23:29:01.090418 2821 log.go:172] (0xc000662820) (3) Data frame handling\nI0506 23:29:01.092251 2821 log.go:172] (0xc0005ee000) Data frame received for 1\nI0506 23:29:01.092293 2821 log.go:172] (0xc0005161e0) (1) Data frame handling\nI0506 23:29:01.092327 2821 log.go:172] (0xc0005161e0) (1) Data frame sent\nI0506 23:29:01.092371 2821 log.go:172] (0xc0005ee000) (0xc0005161e0) Stream removed, broadcasting: 1\nI0506 23:29:01.092402 2821 log.go:172] (0xc0005ee000) Go away received\nI0506 23:29:01.092823 2821 log.go:172] (0xc0005ee000) (0xc0005161e0) Stream removed, broadcasting: 1\nI0506 23:29:01.092861 2821 log.go:172] (0xc0005ee000) (0xc000662820) Stream removed, broadcasting: 3\nI0506 23:29:01.092888 2821 log.go:172] (0xc0005ee000) (0xc0003e35e0) Stream removed, broadcasting: 5\n" May 6 23:29:01.098: INFO: stdout: "" May 6 23:29:01.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3241 execpodlcsl8 -- /bin/sh -x -c nc -zv -t -w 2 10.99.143.139 80' May 6 23:29:01.314: INFO: stderr: "I0506 23:29:01.237803 2843 log.go:172] (0xc0006f66e0) (0xc0006f5ae0) Create stream\nI0506 23:29:01.237878 2843 log.go:172] (0xc0006f66e0) (0xc0006f5ae0) Stream added, broadcasting: 1\nI0506 23:29:01.240633 2843 log.go:172] (0xc0006f66e0) Reply frame received for 1\nI0506 23:29:01.240674 2843 log.go:172] (0xc0006f66e0) (0xc0006f5cc0) Create stream\nI0506 23:29:01.240689 2843 log.go:172] (0xc0006f66e0) (0xc0006f5cc0) Stream added, broadcasting: 3\nI0506 23:29:01.241866 2843 log.go:172] (0xc0006f66e0) Reply frame received for 3\nI0506 23:29:01.241902 2843 log.go:172] (0xc0006f66e0) (0xc0006f5d60) Create stream\nI0506 23:29:01.241915 2843 log.go:172] (0xc0006f66e0) (0xc0006f5d60) Stream added, broadcasting: 5\nI0506 23:29:01.242801 2843 log.go:172] (0xc0006f66e0) Reply frame received for 5\nI0506 23:29:01.307220 2843 log.go:172] (0xc0006f66e0) Data frame received for 3\nI0506 23:29:01.307277 2843 log.go:172] (0xc0006f5cc0) (3) Data frame handling\nI0506 23:29:01.307314 2843 log.go:172] (0xc0006f66e0) Data frame received for 5\nI0506 23:29:01.307335 2843 log.go:172] (0xc0006f5d60) (5) Data frame handling\nI0506 23:29:01.307354 2843 log.go:172] (0xc0006f5d60) (5) Data frame sent\nI0506 23:29:01.307366 2843 log.go:172] (0xc0006f66e0) Data frame received for 5\nI0506 23:29:01.307376 2843 log.go:172] (0xc0006f5d60) (5) Data frame handling\n+ nc -zv -t -w 2 10.99.143.139 80\nConnection to 10.99.143.139 80 port [tcp/http] succeeded!\nI0506 23:29:01.308659 2843 log.go:172] (0xc0006f66e0) Data frame received for 1\nI0506 23:29:01.308693 2843 log.go:172] (0xc0006f5ae0) (1) Data frame handling\nI0506 23:29:01.308728 2843 log.go:172] (0xc0006f5ae0) (1) Data frame sent\nI0506 23:29:01.308754 2843 log.go:172] (0xc0006f66e0) 
(0xc0006f5ae0) Stream removed, broadcasting: 1\nI0506 23:29:01.308828 2843 log.go:172] (0xc0006f66e0) Go away received\nI0506 23:29:01.309399 2843 log.go:172] (0xc0006f66e0) (0xc0006f5ae0) Stream removed, broadcasting: 1\nI0506 23:29:01.309420 2843 log.go:172] (0xc0006f66e0) (0xc0006f5cc0) Stream removed, broadcasting: 3\nI0506 23:29:01.309431 2843 log.go:172] (0xc0006f66e0) (0xc0006f5d60) Stream removed, broadcasting: 5\n" May 6 23:29:01.314: INFO: stdout: "" May 6 23:29:01.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3241 execpodlcsl8 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 31385' May 6 23:29:01.511: INFO: stderr: "I0506 23:29:01.439432 2862 log.go:172] (0xc000af0000) (0xc00079a000) Create stream\nI0506 23:29:01.439513 2862 log.go:172] (0xc000af0000) (0xc00079a000) Stream added, broadcasting: 1\nI0506 23:29:01.441065 2862 log.go:172] (0xc000af0000) Reply frame received for 1\nI0506 23:29:01.441236 2862 log.go:172] (0xc000af0000) (0xc000888000) Create stream\nI0506 23:29:01.441262 2862 log.go:172] (0xc000af0000) (0xc000888000) Stream added, broadcasting: 3\nI0506 23:29:01.441996 2862 log.go:172] (0xc000af0000) Reply frame received for 3\nI0506 23:29:01.442034 2862 log.go:172] (0xc000af0000) (0xc0008880a0) Create stream\nI0506 23:29:01.442046 2862 log.go:172] (0xc000af0000) (0xc0008880a0) Stream added, broadcasting: 5\nI0506 23:29:01.442860 2862 log.go:172] (0xc000af0000) Reply frame received for 5\nI0506 23:29:01.504325 2862 log.go:172] (0xc000af0000) Data frame received for 5\nI0506 23:29:01.504356 2862 log.go:172] (0xc0008880a0) (5) Data frame handling\nI0506 23:29:01.504377 2862 log.go:172] (0xc0008880a0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.10 31385\nI0506 23:29:01.504452 2862 log.go:172] (0xc000af0000) Data frame received for 5\nI0506 23:29:01.504485 2862 log.go:172] (0xc0008880a0) (5) Data frame handling\nI0506 23:29:01.504500 2862 log.go:172] (0xc0008880a0) (5) Data frame sent\nConnection to 172.17.0.10 31385 port [tcp/31385] succeeded!\nI0506 23:29:01.504791 2862 log.go:172] (0xc000af0000) Data frame received for 5\nI0506 23:29:01.504806 2862 log.go:172] (0xc0008880a0) (5) Data frame handling\nI0506 23:29:01.504851 2862 log.go:172] (0xc000af0000) Data frame received for 3\nI0506 23:29:01.504874 2862 log.go:172] (0xc000888000) (3) Data frame handling\nI0506 23:29:01.506988 2862 log.go:172] (0xc000af0000) Data frame received for 1\nI0506 23:29:01.507009 2862 log.go:172] (0xc00079a000) (1) Data frame handling\nI0506 23:29:01.507019 2862 log.go:172] (0xc00079a000) (1) Data frame sent\nI0506 23:29:01.507029 2862 log.go:172] (0xc000af0000) (0xc00079a000) Stream removed, broadcasting: 1\nI0506 23:29:01.507105 2862 log.go:172] (0xc000af0000) Go away received\nI0506 23:29:01.507331 2862 log.go:172] (0xc000af0000) (0xc00079a000) Stream removed, broadcasting: 1\nI0506 23:29:01.507348 2862 log.go:172] (0xc000af0000) (0xc000888000) Stream removed, broadcasting: 3\nI0506 23:29:01.507356 2862 log.go:172] (0xc000af0000) (0xc0008880a0) Stream removed, broadcasting: 5\n" May 6 23:29:01.512: INFO: stdout: "" May 6 23:29:01.512: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3241 execpodlcsl8 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 31385' May 6 23:29:01.746: INFO: stderr: "I0506 23:29:01.643787 2881 log.go:172] (0xc0000f5340) (0xc000607a40) Create stream\nI0506 23:29:01.643841 2881 log.go:172] (0xc0000f5340) (0xc000607a40) Stream added, broadcasting: 1\nI0506 23:29:01.647068 2881 
log.go:172] (0xc0000f5340) Reply frame received for 1\nI0506 23:29:01.647118 2881 log.go:172] (0xc0000f5340) (0xc000716000) Create stream\nI0506 23:29:01.647134 2881 log.go:172] (0xc0000f5340) (0xc000716000) Stream added, broadcasting: 3\nI0506 23:29:01.648285 2881 log.go:172] (0xc0000f5340) Reply frame received for 3\nI0506 23:29:01.648312 2881 log.go:172] (0xc0000f5340) (0xc000607ae0) Create stream\nI0506 23:29:01.648321 2881 log.go:172] (0xc0000f5340) (0xc000607ae0) Stream added, broadcasting: 5\nI0506 23:29:01.649904 2881 log.go:172] (0xc0000f5340) Reply frame received for 5\nI0506 23:29:01.738150 2881 log.go:172] (0xc0000f5340) Data frame received for 3\nI0506 23:29:01.738190 2881 log.go:172] (0xc000716000) (3) Data frame handling\nI0506 23:29:01.738227 2881 log.go:172] (0xc0000f5340) Data frame received for 5\nI0506 23:29:01.738265 2881 log.go:172] (0xc000607ae0) (5) Data frame handling\nI0506 23:29:01.738286 2881 log.go:172] (0xc000607ae0) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.8 31385\nConnection to 172.17.0.8 31385 port [tcp/31385] succeeded!\nI0506 23:29:01.738305 2881 log.go:172] (0xc0000f5340) Data frame received for 5\nI0506 23:29:01.738347 2881 log.go:172] (0xc000607ae0) (5) Data frame handling\nI0506 23:29:01.739965 2881 log.go:172] (0xc0000f5340) Data frame received for 1\nI0506 23:29:01.739989 2881 log.go:172] (0xc000607a40) (1) Data frame handling\nI0506 23:29:01.740003 2881 log.go:172] (0xc000607a40) (1) Data frame sent\nI0506 23:29:01.740024 2881 log.go:172] (0xc0000f5340) (0xc000607a40) Stream removed, broadcasting: 1\nI0506 23:29:01.740057 2881 log.go:172] (0xc0000f5340) Go away received\nI0506 23:29:01.740554 2881 log.go:172] (0xc0000f5340) (0xc000607a40) Stream removed, broadcasting: 1\nI0506 23:29:01.740580 2881 log.go:172] (0xc0000f5340) (0xc000716000) Stream removed, broadcasting: 3\nI0506 23:29:01.740592 2881 log.go:172] (0xc0000f5340) (0xc000607ae0) Stream removed, broadcasting: 5\n" May 6 23:29:01.746: INFO: stdout: "" May 6 23:29:01.746: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:29:01.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3241" for this suite. 
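In outline, the test above creates an ExternalName service backed by a replication controller, flips the service type, and then checks reachability from an in-cluster exec pod. A hand-run equivalent looks roughly like this; the names, IPs, and the allocated node port 31385 are what this particular run produced, and the merge patch clearing externalName is an assumed minimal change, not the suite's own client-side update.

# Flip the service from ExternalName to NodePort; externalName has to be
# cleared for the new type to validate (assumption, not the suite's code):
kubectl patch service externalname-service --namespace=services-3241 \
  --type=merge -p '{"spec":{"type":"NodePort","externalName":null}}'
# Probe the service name, its cluster IP, and a node IP plus the allocated
# node port, the same way the nc checks above do:
kubectl exec --namespace=services-3241 execpodlcsl8 -- /bin/sh -x -c 'nc -zv -t -w 2 externalname-service 80'
kubectl exec --namespace=services-3241 execpodlcsl8 -- /bin/sh -x -c 'nc -zv -t -w 2 10.99.143.139 80'
kubectl exec --namespace=services-3241 execpodlcsl8 -- /bin/sh -x -c 'nc -zv -t -w 2 172.17.0.8 31385'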
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.375 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":137,"skipped":2160,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:29:01.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs May 6 23:29:01.882: INFO: Waiting up to 5m0s for pod "pod-e0ff1c5d-4b47-44e4-a086-8401c383c3b5" in namespace "emptydir-2359" to be "success or failure" May 6 23:29:01.886: INFO: Pod "pod-e0ff1c5d-4b47-44e4-a086-8401c383c3b5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.68277ms May 6 23:29:03.891: INFO: Pod "pod-e0ff1c5d-4b47-44e4-a086-8401c383c3b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008237517s May 6 23:29:05.895: INFO: Pod "pod-e0ff1c5d-4b47-44e4-a086-8401c383c3b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012616089s STEP: Saw pod success May 6 23:29:05.895: INFO: Pod "pod-e0ff1c5d-4b47-44e4-a086-8401c383c3b5" satisfied condition "success or failure" May 6 23:29:05.898: INFO: Trying to get logs from node jerma-worker2 pod pod-e0ff1c5d-4b47-44e4-a086-8401c383c3b5 container test-container: STEP: delete the pod May 6 23:29:05.970: INFO: Waiting for pod pod-e0ff1c5d-4b47-44e4-a086-8401c383c3b5 to disappear May 6 23:29:05.982: INFO: Pod pod-e0ff1c5d-4b47-44e4-a086-8401c383c3b5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:29:05.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2359" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":138,"skipped":2164,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:29:05.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:29:22.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4534" for this suite. • [SLOW TEST:16.597 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":278,"completed":139,"skipped":2206,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:29:22.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 6 23:29:27.349: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:29:27.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5199" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":140,"skipped":2221,"failed":0} SSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:29:27.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test hostPath mode May 6 23:29:27.421: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-6857" to be "success or failure" May 6 23:29:28.021: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 600.194458ms May 6 23:29:30.167: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.745723928s May 6 23:29:32.287: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.865889833s May 6 23:29:34.292: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.871356928s STEP: Saw pod success May 6 23:29:34.292: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" May 6 23:29:34.296: INFO: Trying to get logs from node jerma-worker pod pod-host-path-test container test-container-1: STEP: delete the pod May 6 23:29:34.341: INFO: Waiting for pod pod-host-path-test to disappear May 6 23:29:34.352: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:29:34.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-6857" for this suite. • [SLOW TEST:6.988 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":141,"skipped":2228,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:29:34.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs May 6 23:29:34.414: INFO: Waiting up to 5m0s for pod "pod-006d5d63-70a8-4d00-aa62-7c70114fc4dd" in namespace "emptydir-3068" to be "success or failure" May 6 23:29:34.436: INFO: Pod "pod-006d5d63-70a8-4d00-aa62-7c70114fc4dd": Phase="Pending", Reason="", readiness=false. Elapsed: 21.573943ms May 6 23:29:36.440: INFO: Pod "pod-006d5d63-70a8-4d00-aa62-7c70114fc4dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025781935s May 6 23:29:38.444: INFO: Pod "pod-006d5d63-70a8-4d00-aa62-7c70114fc4dd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029893785s May 6 23:29:40.580: INFO: Pod "pod-006d5d63-70a8-4d00-aa62-7c70114fc4dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.166091137s STEP: Saw pod success May 6 23:29:40.580: INFO: Pod "pod-006d5d63-70a8-4d00-aa62-7c70114fc4dd" satisfied condition "success or failure" May 6 23:29:40.586: INFO: Trying to get logs from node jerma-worker pod pod-006d5d63-70a8-4d00-aa62-7c70114fc4dd container test-container: STEP: delete the pod May 6 23:29:40.875: INFO: Waiting for pod pod-006d5d63-70a8-4d00-aa62-7c70114fc4dd to disappear May 6 23:29:41.084: INFO: Pod pod-006d5d63-70a8-4d00-aa62-7c70114fc4dd no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:29:41.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3068" for this suite. 
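Both tmpfs EmptyDir cases in this stretch of the run come down to mounting a memory-backed emptyDir and checking the resulting filesystem type and permission bits. A minimal hand-rolled version, with a hypothetical pod name and a busybox one-liner standing in for the suite's test container:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # Print the mount's filesystem type, then its mode and ownership.
    command: ["/bin/sh", "-c", "mount | grep /cache; ls -ldn /cache"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir:
      medium: Memory   # tmpfs, as in the (root,0777,tmpfs) and (non-root,0644,tmpfs) cases
EOF
# The log should show a tmpfs mount at /cache:
kubectl logs emptydir-tmpfs-demo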
• [SLOW TEST:6.733 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":142,"skipped":2232,"failed":0} SSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:29:41.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:29:45.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1054" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":143,"skipped":2236,"failed":0} S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:29:45.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-9895 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating stateful set ss in namespace statefulset-9895 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9895 May 6 23:29:45.510: INFO: Found 0 stateful pods, waiting for 1 May 6 23:29:55.515: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up 
will not halt with unhealthy stateful pod May 6 23:29:55.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 6 23:29:55.768: INFO: stderr: "I0506 23:29:55.669657 2903 log.go:172] (0xc00010b550) (0xc00068dae0) Create stream\nI0506 23:29:55.669717 2903 log.go:172] (0xc00010b550) (0xc00068dae0) Stream added, broadcasting: 1\nI0506 23:29:55.671701 2903 log.go:172] (0xc00010b550) Reply frame received for 1\nI0506 23:29:55.671737 2903 log.go:172] (0xc00010b550) (0xc00066c000) Create stream\nI0506 23:29:55.671751 2903 log.go:172] (0xc00010b550) (0xc00066c000) Stream added, broadcasting: 3\nI0506 23:29:55.672593 2903 log.go:172] (0xc00010b550) Reply frame received for 3\nI0506 23:29:55.672645 2903 log.go:172] (0xc00010b550) (0xc00066c140) Create stream\nI0506 23:29:55.672658 2903 log.go:172] (0xc00010b550) (0xc00066c140) Stream added, broadcasting: 5\nI0506 23:29:55.673795 2903 log.go:172] (0xc00010b550) Reply frame received for 5\nI0506 23:29:55.727772 2903 log.go:172] (0xc00010b550) Data frame received for 5\nI0506 23:29:55.727793 2903 log.go:172] (0xc00066c140) (5) Data frame handling\nI0506 23:29:55.727804 2903 log.go:172] (0xc00066c140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0506 23:29:55.759527 2903 log.go:172] (0xc00010b550) Data frame received for 3\nI0506 23:29:55.759562 2903 log.go:172] (0xc00066c000) (3) Data frame handling\nI0506 23:29:55.759575 2903 log.go:172] (0xc00066c000) (3) Data frame sent\nI0506 23:29:55.759644 2903 log.go:172] (0xc00010b550) Data frame received for 5\nI0506 23:29:55.759672 2903 log.go:172] (0xc00066c140) (5) Data frame handling\nI0506 23:29:55.759854 2903 log.go:172] (0xc00010b550) Data frame received for 3\nI0506 23:29:55.759882 2903 log.go:172] (0xc00066c000) (3) Data frame handling\nI0506 23:29:55.762235 2903 log.go:172] (0xc00010b550) Data frame received for 1\nI0506 23:29:55.762262 2903 log.go:172] (0xc00068dae0) (1) Data frame handling\nI0506 23:29:55.762286 2903 log.go:172] (0xc00068dae0) (1) Data frame sent\nI0506 23:29:55.762316 2903 log.go:172] (0xc00010b550) (0xc00068dae0) Stream removed, broadcasting: 1\nI0506 23:29:55.762333 2903 log.go:172] (0xc00010b550) Go away received\nI0506 23:29:55.762775 2903 log.go:172] (0xc00010b550) (0xc00068dae0) Stream removed, broadcasting: 1\nI0506 23:29:55.762795 2903 log.go:172] (0xc00010b550) (0xc00066c000) Stream removed, broadcasting: 3\nI0506 23:29:55.762804 2903 log.go:172] (0xc00010b550) (0xc00066c140) Stream removed, broadcasting: 5\n" May 6 23:29:55.768: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 6 23:29:55.768: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 6 23:29:55.771: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 6 23:30:05.775: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 6 23:30:05.775: INFO: Waiting for statefulset status.replicas updated to 0 May 6 23:30:05.797: INFO: POD NODE PHASE GRACE CONDITIONS May 6 23:30:05.797: INFO: ss-0 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:29:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:29:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 
0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:29:56 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:29:45 +0000 UTC }] May 6 23:30:05.797: INFO: May 6 23:30:05.797: INFO: StatefulSet ss has not reached scale 3, at 1 May 6 23:30:06.802: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.986548505s May 6 23:30:07.806: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.9814679s May 6 23:30:08.910: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.977522165s May 6 23:30:09.916: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.873679136s May 6 23:30:10.921: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.868235291s May 6 23:30:11.925: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.862488127s May 6 23:30:12.930: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.85912684s May 6 23:30:13.936: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.853632673s May 6 23:30:14.941: INFO: Verifying statefulset ss doesn't scale past 3 for another 848.167227ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9895 May 6 23:30:15.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 23:30:16.173: INFO: stderr: "I0506 23:30:16.083268 2926 log.go:172] (0xc000a77970) (0xc000ac86e0) Create stream\nI0506 23:30:16.083327 2926 log.go:172] (0xc000a77970) (0xc000ac86e0) Stream added, broadcasting: 1\nI0506 23:30:16.087451 2926 log.go:172] (0xc000a77970) Reply frame received for 1\nI0506 23:30:16.087481 2926 log.go:172] (0xc000a77970) (0xc000582640) Create stream\nI0506 23:30:16.087489 2926 log.go:172] (0xc000a77970) (0xc000582640) Stream added, broadcasting: 3\nI0506 23:30:16.088363 2926 log.go:172] (0xc000a77970) Reply frame received for 3\nI0506 23:30:16.088413 2926 log.go:172] (0xc000a77970) (0xc000659180) Create stream\nI0506 23:30:16.088431 2926 log.go:172] (0xc000a77970) (0xc000659180) Stream added, broadcasting: 5\nI0506 23:30:16.089457 2926 log.go:172] (0xc000a77970) Reply frame received for 5\nI0506 23:30:16.167232 2926 log.go:172] (0xc000a77970) Data frame received for 5\nI0506 23:30:16.167329 2926 log.go:172] (0xc000659180) (5) Data frame handling\nI0506 23:30:16.167371 2926 log.go:172] (0xc000659180) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0506 23:30:16.167400 2926 log.go:172] (0xc000a77970) Data frame received for 3\nI0506 23:30:16.167411 2926 log.go:172] (0xc000582640) (3) Data frame handling\nI0506 23:30:16.167422 2926 log.go:172] (0xc000582640) (3) Data frame sent\nI0506 23:30:16.167430 2926 log.go:172] (0xc000a77970) Data frame received for 3\nI0506 23:30:16.167436 2926 log.go:172] (0xc000582640) (3) Data frame handling\nI0506 23:30:16.167495 2926 log.go:172] (0xc000a77970) Data frame received for 5\nI0506 23:30:16.167521 2926 log.go:172] (0xc000659180) (5) Data frame handling\nI0506 23:30:16.169006 2926 log.go:172] (0xc000a77970) Data frame received for 1\nI0506 23:30:16.169029 2926 log.go:172] (0xc000ac86e0) (1) Data frame handling\nI0506 23:30:16.169038 2926 log.go:172] (0xc000ac86e0) (1) Data frame sent\nI0506 23:30:16.169046 2926 log.go:172] (0xc000a77970) (0xc000ac86e0) Stream removed, broadcasting: 1\nI0506 23:30:16.169063 2926 log.go:172] 
(0xc000a77970) Go away received\nI0506 23:30:16.169601 2926 log.go:172] (0xc000a77970) (0xc000ac86e0) Stream removed, broadcasting: 1\nI0506 23:30:16.169620 2926 log.go:172] (0xc000a77970) (0xc000582640) Stream removed, broadcasting: 3\nI0506 23:30:16.169629 2926 log.go:172] (0xc000a77970) (0xc000659180) Stream removed, broadcasting: 5\n" May 6 23:30:16.174: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 6 23:30:16.174: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 6 23:30:16.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 23:30:16.394: INFO: stderr: "I0506 23:30:16.321946 2946 log.go:172] (0xc0001051e0) (0xc000a46000) Create stream\nI0506 23:30:16.322017 2946 log.go:172] (0xc0001051e0) (0xc000a46000) Stream added, broadcasting: 1\nI0506 23:30:16.324804 2946 log.go:172] (0xc0001051e0) Reply frame received for 1\nI0506 23:30:16.324866 2946 log.go:172] (0xc0001051e0) (0xc0006ff9a0) Create stream\nI0506 23:30:16.324896 2946 log.go:172] (0xc0001051e0) (0xc0006ff9a0) Stream added, broadcasting: 3\nI0506 23:30:16.326246 2946 log.go:172] (0xc0001051e0) Reply frame received for 3\nI0506 23:30:16.326297 2946 log.go:172] (0xc0001051e0) (0xc0002e2000) Create stream\nI0506 23:30:16.326313 2946 log.go:172] (0xc0001051e0) (0xc0002e2000) Stream added, broadcasting: 5\nI0506 23:30:16.327417 2946 log.go:172] (0xc0001051e0) Reply frame received for 5\nI0506 23:30:16.387142 2946 log.go:172] (0xc0001051e0) Data frame received for 3\nI0506 23:30:16.387187 2946 log.go:172] (0xc0006ff9a0) (3) Data frame handling\nI0506 23:30:16.387200 2946 log.go:172] (0xc0006ff9a0) (3) Data frame sent\nI0506 23:30:16.387208 2946 log.go:172] (0xc0001051e0) Data frame received for 3\nI0506 23:30:16.387215 2946 log.go:172] (0xc0006ff9a0) (3) Data frame handling\nI0506 23:30:16.387247 2946 log.go:172] (0xc0001051e0) Data frame received for 5\nI0506 23:30:16.387256 2946 log.go:172] (0xc0002e2000) (5) Data frame handling\nI0506 23:30:16.387273 2946 log.go:172] (0xc0002e2000) (5) Data frame sent\nI0506 23:30:16.387282 2946 log.go:172] (0xc0001051e0) Data frame received for 5\nI0506 23:30:16.387289 2946 log.go:172] (0xc0002e2000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0506 23:30:16.389252 2946 log.go:172] (0xc0001051e0) Data frame received for 1\nI0506 23:30:16.389312 2946 log.go:172] (0xc000a46000) (1) Data frame handling\nI0506 23:30:16.389334 2946 log.go:172] (0xc000a46000) (1) Data frame sent\nI0506 23:30:16.389353 2946 log.go:172] (0xc0001051e0) (0xc000a46000) Stream removed, broadcasting: 1\nI0506 23:30:16.389375 2946 log.go:172] (0xc0001051e0) Go away received\nI0506 23:30:16.389912 2946 log.go:172] (0xc0001051e0) (0xc000a46000) Stream removed, broadcasting: 1\nI0506 23:30:16.389947 2946 log.go:172] (0xc0001051e0) (0xc0006ff9a0) Stream removed, broadcasting: 3\nI0506 23:30:16.389965 2946 log.go:172] (0xc0001051e0) (0xc0002e2000) Stream removed, broadcasting: 5\n" May 6 23:30:16.395: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 6 23:30:16.395: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 6 23:30:16.395: INFO: 
Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 23:30:16.600: INFO: stderr: "I0506 23:30:16.538291 2967 log.go:172] (0xc0001042c0) (0xc000b94000) Create stream\nI0506 23:30:16.538377 2967 log.go:172] (0xc0001042c0) (0xc000b94000) Stream added, broadcasting: 1\nI0506 23:30:16.540378 2967 log.go:172] (0xc0001042c0) Reply frame received for 1\nI0506 23:30:16.540419 2967 log.go:172] (0xc0001042c0) (0xc0005bb540) Create stream\nI0506 23:30:16.540436 2967 log.go:172] (0xc0001042c0) (0xc0005bb540) Stream added, broadcasting: 3\nI0506 23:30:16.541687 2967 log.go:172] (0xc0001042c0) Reply frame received for 3\nI0506 23:30:16.541722 2967 log.go:172] (0xc0001042c0) (0xc000b940a0) Create stream\nI0506 23:30:16.541736 2967 log.go:172] (0xc0001042c0) (0xc000b940a0) Stream added, broadcasting: 5\nI0506 23:30:16.542594 2967 log.go:172] (0xc0001042c0) Reply frame received for 5\nI0506 23:30:16.591755 2967 log.go:172] (0xc0001042c0) Data frame received for 5\nI0506 23:30:16.591840 2967 log.go:172] (0xc000b940a0) (5) Data frame handling\nI0506 23:30:16.591864 2967 log.go:172] (0xc000b940a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0506 23:30:16.591889 2967 log.go:172] (0xc0001042c0) Data frame received for 3\nI0506 23:30:16.591909 2967 log.go:172] (0xc0005bb540) (3) Data frame handling\nI0506 23:30:16.591932 2967 log.go:172] (0xc0005bb540) (3) Data frame sent\nI0506 23:30:16.591947 2967 log.go:172] (0xc0001042c0) Data frame received for 3\nI0506 23:30:16.591956 2967 log.go:172] (0xc0005bb540) (3) Data frame handling\nI0506 23:30:16.592031 2967 log.go:172] (0xc0001042c0) Data frame received for 5\nI0506 23:30:16.592050 2967 log.go:172] (0xc000b940a0) (5) Data frame handling\nI0506 23:30:16.594227 2967 log.go:172] (0xc0001042c0) Data frame received for 1\nI0506 23:30:16.594252 2967 log.go:172] (0xc000b94000) (1) Data frame handling\nI0506 23:30:16.594271 2967 log.go:172] (0xc000b94000) (1) Data frame sent\nI0506 23:30:16.594410 2967 log.go:172] (0xc0001042c0) (0xc000b94000) Stream removed, broadcasting: 1\nI0506 23:30:16.594553 2967 log.go:172] (0xc0001042c0) Go away received\nI0506 23:30:16.594759 2967 log.go:172] (0xc0001042c0) (0xc000b94000) Stream removed, broadcasting: 1\nI0506 23:30:16.594788 2967 log.go:172] (0xc0001042c0) (0xc0005bb540) Stream removed, broadcasting: 3\nI0506 23:30:16.594796 2967 log.go:172] (0xc0001042c0) (0xc000b940a0) Stream removed, broadcasting: 5\n" May 6 23:30:16.600: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 6 23:30:16.600: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 6 23:30:16.606: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 6 23:30:16.606: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 6 23:30:16.606: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 6 23:30:16.611: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 6 23:30:16.830: INFO: stderr: "I0506 23:30:16.755564 2988 
log.go:172] (0xc00010b550) (0xc000932000) Create stream\nI0506 23:30:16.755648 2988 log.go:172] (0xc00010b550) (0xc000932000) Stream added, broadcasting: 1\nI0506 23:30:16.758472 2988 log.go:172] (0xc00010b550) Reply frame received for 1\nI0506 23:30:16.758498 2988 log.go:172] (0xc00010b550) (0xc000651ae0) Create stream\nI0506 23:30:16.758505 2988 log.go:172] (0xc00010b550) (0xc000651ae0) Stream added, broadcasting: 3\nI0506 23:30:16.759345 2988 log.go:172] (0xc00010b550) Reply frame received for 3\nI0506 23:30:16.759391 2988 log.go:172] (0xc00010b550) (0xc0009320a0) Create stream\nI0506 23:30:16.759403 2988 log.go:172] (0xc00010b550) (0xc0009320a0) Stream added, broadcasting: 5\nI0506 23:30:16.760490 2988 log.go:172] (0xc00010b550) Reply frame received for 5\nI0506 23:30:16.823112 2988 log.go:172] (0xc00010b550) Data frame received for 5\nI0506 23:30:16.823156 2988 log.go:172] (0xc0009320a0) (5) Data frame handling\nI0506 23:30:16.823174 2988 log.go:172] (0xc0009320a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0506 23:30:16.823201 2988 log.go:172] (0xc00010b550) Data frame received for 3\nI0506 23:30:16.823222 2988 log.go:172] (0xc000651ae0) (3) Data frame handling\nI0506 23:30:16.823240 2988 log.go:172] (0xc000651ae0) (3) Data frame sent\nI0506 23:30:16.823413 2988 log.go:172] (0xc00010b550) Data frame received for 3\nI0506 23:30:16.823442 2988 log.go:172] (0xc000651ae0) (3) Data frame handling\nI0506 23:30:16.823646 2988 log.go:172] (0xc00010b550) Data frame received for 5\nI0506 23:30:16.823664 2988 log.go:172] (0xc0009320a0) (5) Data frame handling\nI0506 23:30:16.825464 2988 log.go:172] (0xc00010b550) Data frame received for 1\nI0506 23:30:16.825494 2988 log.go:172] (0xc000932000) (1) Data frame handling\nI0506 23:30:16.825515 2988 log.go:172] (0xc000932000) (1) Data frame sent\nI0506 23:30:16.825531 2988 log.go:172] (0xc00010b550) (0xc000932000) Stream removed, broadcasting: 1\nI0506 23:30:16.825676 2988 log.go:172] (0xc00010b550) Go away received\nI0506 23:30:16.825906 2988 log.go:172] (0xc00010b550) (0xc000932000) Stream removed, broadcasting: 1\nI0506 23:30:16.825924 2988 log.go:172] (0xc00010b550) (0xc000651ae0) Stream removed, broadcasting: 3\nI0506 23:30:16.825934 2988 log.go:172] (0xc00010b550) (0xc0009320a0) Stream removed, broadcasting: 5\n" May 6 23:30:16.830: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 6 23:30:16.830: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 6 23:30:16.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 6 23:30:17.109: INFO: stderr: "I0506 23:30:16.991372 3009 log.go:172] (0xc000940160) (0xc0002a75e0) Create stream\nI0506 23:30:16.991438 3009 log.go:172] (0xc000940160) (0xc0002a75e0) Stream added, broadcasting: 1\nI0506 23:30:16.994007 3009 log.go:172] (0xc000940160) Reply frame received for 1\nI0506 23:30:16.994052 3009 log.go:172] (0xc000940160) (0xc0006afd60) Create stream\nI0506 23:30:16.994064 3009 log.go:172] (0xc000940160) (0xc0006afd60) Stream added, broadcasting: 3\nI0506 23:30:16.994922 3009 log.go:172] (0xc000940160) Reply frame received for 3\nI0506 23:30:16.994949 3009 log.go:172] (0xc000940160) (0xc0006afe00) Create stream\nI0506 23:30:16.994958 3009 log.go:172] (0xc000940160) (0xc0006afe00) Stream added, broadcasting: 5\nI0506 
23:30:16.995850 3009 log.go:172] (0xc000940160) Reply frame received for 5\nI0506 23:30:17.069086 3009 log.go:172] (0xc000940160) Data frame received for 5\nI0506 23:30:17.069252 3009 log.go:172] (0xc0006afe00) (5) Data frame handling\nI0506 23:30:17.069303 3009 log.go:172] (0xc0006afe00) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0506 23:30:17.101100 3009 log.go:172] (0xc000940160) Data frame received for 3\nI0506 23:30:17.101330 3009 log.go:172] (0xc000940160) Data frame received for 5\nI0506 23:30:17.101376 3009 log.go:172] (0xc0006afe00) (5) Data frame handling\nI0506 23:30:17.101409 3009 log.go:172] (0xc0006afd60) (3) Data frame handling\nI0506 23:30:17.101435 3009 log.go:172] (0xc0006afd60) (3) Data frame sent\nI0506 23:30:17.101583 3009 log.go:172] (0xc000940160) Data frame received for 3\nI0506 23:30:17.101624 3009 log.go:172] (0xc0006afd60) (3) Data frame handling\nI0506 23:30:17.103377 3009 log.go:172] (0xc000940160) Data frame received for 1\nI0506 23:30:17.103411 3009 log.go:172] (0xc0002a75e0) (1) Data frame handling\nI0506 23:30:17.103452 3009 log.go:172] (0xc0002a75e0) (1) Data frame sent\nI0506 23:30:17.103479 3009 log.go:172] (0xc000940160) (0xc0002a75e0) Stream removed, broadcasting: 1\nI0506 23:30:17.103791 3009 log.go:172] (0xc000940160) Go away received\nI0506 23:30:17.103895 3009 log.go:172] (0xc000940160) (0xc0002a75e0) Stream removed, broadcasting: 1\nI0506 23:30:17.103916 3009 log.go:172] (0xc000940160) (0xc0006afd60) Stream removed, broadcasting: 3\nI0506 23:30:17.103928 3009 log.go:172] (0xc000940160) (0xc0006afe00) Stream removed, broadcasting: 5\n" May 6 23:30:17.109: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 6 23:30:17.109: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 6 23:30:17.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 6 23:30:17.340: INFO: stderr: "I0506 23:30:17.228887 3030 log.go:172] (0xc0000ee370) (0xc0002e7c20) Create stream\nI0506 23:30:17.228937 3030 log.go:172] (0xc0000ee370) (0xc0002e7c20) Stream added, broadcasting: 1\nI0506 23:30:17.230877 3030 log.go:172] (0xc0000ee370) Reply frame received for 1\nI0506 23:30:17.230902 3030 log.go:172] (0xc0000ee370) (0xc000902000) Create stream\nI0506 23:30:17.230910 3030 log.go:172] (0xc0000ee370) (0xc000902000) Stream added, broadcasting: 3\nI0506 23:30:17.231525 3030 log.go:172] (0xc0000ee370) Reply frame received for 3\nI0506 23:30:17.231539 3030 log.go:172] (0xc0000ee370) (0xc0002e7cc0) Create stream\nI0506 23:30:17.231546 3030 log.go:172] (0xc0000ee370) (0xc0002e7cc0) Stream added, broadcasting: 5\nI0506 23:30:17.232241 3030 log.go:172] (0xc0000ee370) Reply frame received for 5\nI0506 23:30:17.287885 3030 log.go:172] (0xc0000ee370) Data frame received for 5\nI0506 23:30:17.287908 3030 log.go:172] (0xc0002e7cc0) (5) Data frame handling\nI0506 23:30:17.287921 3030 log.go:172] (0xc0002e7cc0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0506 23:30:17.329411 3030 log.go:172] (0xc0000ee370) Data frame received for 3\nI0506 23:30:17.329673 3030 log.go:172] (0xc000902000) (3) Data frame handling\nI0506 23:30:17.329698 3030 log.go:172] (0xc000902000) (3) Data frame sent\nI0506 23:30:17.329845 3030 log.go:172] (0xc0000ee370) Data frame received for 3\nI0506 
23:30:17.329871 3030 log.go:172] (0xc000902000) (3) Data frame handling\nI0506 23:30:17.329906 3030 log.go:172] (0xc0000ee370) Data frame received for 5\nI0506 23:30:17.329924 3030 log.go:172] (0xc0002e7cc0) (5) Data frame handling\nI0506 23:30:17.332618 3030 log.go:172] (0xc0000ee370) Data frame received for 1\nI0506 23:30:17.332648 3030 log.go:172] (0xc0002e7c20) (1) Data frame handling\nI0506 23:30:17.332676 3030 log.go:172] (0xc0002e7c20) (1) Data frame sent\nI0506 23:30:17.332697 3030 log.go:172] (0xc0000ee370) (0xc0002e7c20) Stream removed, broadcasting: 1\nI0506 23:30:17.332730 3030 log.go:172] (0xc0000ee370) Go away received\nI0506 23:30:17.333465 3030 log.go:172] (0xc0000ee370) (0xc0002e7c20) Stream removed, broadcasting: 1\nI0506 23:30:17.333512 3030 log.go:172] (0xc0000ee370) (0xc000902000) Stream removed, broadcasting: 3\nI0506 23:30:17.333542 3030 log.go:172] (0xc0000ee370) (0xc0002e7cc0) Stream removed, broadcasting: 5\n" May 6 23:30:17.340: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 6 23:30:17.340: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 6 23:30:17.340: INFO: Waiting for statefulset status.replicas updated to 0 May 6 23:30:17.343: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 6 23:30:27.372: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 6 23:30:27.372: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 6 23:30:27.372: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 6 23:30:27.385: INFO: POD NODE PHASE GRACE CONDITIONS May 6 23:30:27.385: INFO: ss-0 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:29:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:29:45 +0000 UTC }] May 6 23:30:27.386: INFO: ss-1 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:05 +0000 UTC }] May 6 23:30:27.386: INFO: ss-2 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:05 +0000 UTC }] May 6 23:30:27.386: INFO: May 6 23:30:27.386: INFO: StatefulSet ss has not reached scale 0, at 3 May 6 23:30:28.389: INFO: POD NODE PHASE GRACE CONDITIONS May 6 23:30:28.389: INFO: ss-0 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 
23:29:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:29:45 +0000 UTC }] May 6 23:30:28.389: INFO: ss-1 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:05 +0000 UTC }] May 6 23:30:28.389: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:05 +0000 UTC }] May 6 23:30:28.389: INFO: May 6 23:30:28.389: INFO: StatefulSet ss has not reached scale 0, at 3 May 6 23:30:29.393: INFO: POD NODE PHASE GRACE CONDITIONS May 6 23:30:29.393: INFO: ss-0 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:29:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:29:45 +0000 UTC }] May 6 23:30:29.394: INFO: ss-1 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:05 +0000 UTC }] May 6 23:30:29.394: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:05 +0000 UTC }] May 6 23:30:29.394: INFO: May 6 23:30:29.394: INFO: StatefulSet ss has not reached scale 0, at 3 May 6 23:30:30.398: INFO: POD NODE PHASE GRACE CONDITIONS May 6 23:30:30.398: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:29:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 
23:30:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:29:45 +0000 UTC }] May 6 23:30:30.398: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:05 +0000 UTC }] May 6 23:30:30.398: INFO: May 6 23:30:30.398: INFO: StatefulSet ss has not reached scale 0, at 2 May 6 23:30:31.402: INFO: POD NODE PHASE GRACE CONDITIONS May 6 23:30:31.402: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:29:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:29:45 +0000 UTC }] May 6 23:30:31.402: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:05 +0000 UTC }] May 6 23:30:31.402: INFO: May 6 23:30:31.402: INFO: StatefulSet ss has not reached scale 0, at 2 May 6 23:30:32.407: INFO: POD NODE PHASE GRACE CONDITIONS May 6 23:30:32.407: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:29:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:29:45 +0000 UTC }] May 6 23:30:32.407: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:05 +0000 UTC }] May 6 23:30:32.407: INFO: May 6 23:30:32.407: INFO: StatefulSet ss has not reached scale 0, at 2 May 6 23:30:33.412: INFO: POD NODE PHASE GRACE CONDITIONS May 6 23:30:33.412: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:29:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:17 +0000 UTC ContainersNotReady containers with unready 
status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:29:45 +0000 UTC }] May 6 23:30:33.412: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:05 +0000 UTC }] May 6 23:30:33.412: INFO: May 6 23:30:33.412: INFO: StatefulSet ss has not reached scale 0, at 2 May 6 23:30:34.415: INFO: POD NODE PHASE GRACE CONDITIONS May 6 23:30:34.415: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:29:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:29:45 +0000 UTC }] May 6 23:30:34.416: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:05 +0000 UTC }] May 6 23:30:34.416: INFO: May 6 23:30:34.416: INFO: StatefulSet ss has not reached scale 0, at 2 May 6 23:30:35.427: INFO: POD NODE PHASE GRACE CONDITIONS May 6 23:30:35.427: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:29:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:29:45 +0000 UTC }] May 6 23:30:35.427: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:05 +0000 UTC }] May 6 23:30:35.427: INFO: May 6 23:30:35.427: INFO: StatefulSet ss has not reached scale 0, at 2 May 6 23:30:36.431: INFO: POD NODE PHASE GRACE CONDITIONS May 6 23:30:36.431: INFO: ss-0 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:29:45 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:17 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 
UTC 2020-05-06 23:29:45 +0000 UTC }] May 6 23:30:36.431: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:18 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-06 23:30:05 +0000 UTC }] May 6 23:30:36.431: INFO: May 6 23:30:36.431: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-9895 May 6 23:30:37.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 23:30:37.616: INFO: rc: 1 May 6 23:30:37.616: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 May 6 23:30:47.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 23:30:47.715: INFO: rc: 1 May 6 23:30:47.715: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 23:30:57.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 23:30:57.816: INFO: rc: 1 May 6 23:30:57.816: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 23:31:07.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 23:31:07.902: INFO: rc: 1 May 6 23:31:07.902: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 23:31:17.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 23:31:18.000: INFO: rc: 1 May 6 23:31:18.000: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html
/usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 23:31:28.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 23:31:28.098: INFO: rc: 1 May 6 23:31:28.099: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 23:31:38.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 23:31:38.225: INFO: rc: 1 May 6 23:31:38.225: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 23:31:48.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 23:31:48.335: INFO: rc: 1 May 6 23:31:48.335: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 23:31:58.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 23:31:58.561: INFO: rc: 1 May 6 23:31:58.562: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 23:32:08.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 23:32:08.669: INFO: rc: 1 May 6 23:32:08.669: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 23:32:18.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 23:32:18.769: INFO: rc: 1 May 6 23:32:18.769: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: 
Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 23:32:28.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 23:32:28.867: INFO: rc: 1 May 6 23:32:28.867: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 23:32:38.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 23:32:38.964: INFO: rc: 1 May 6 23:32:38.964: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 23:32:48.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 23:32:49.061: INFO: rc: 1 May 6 23:32:49.061: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 23:32:59.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 23:32:59.168: INFO: rc: 1 May 6 23:32:59.168: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 23:33:09.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 23:33:09.260: INFO: rc: 1 May 6 23:33:09.260: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 23:33:19.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 23:33:19.470: INFO: rc: 1 May 6 23:33:19.470: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: 
exit status 1 May 6 23:33:29.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 23:33:29.607: INFO: rc: 1 May 6 23:33:29.607: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 23:33:39.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 23:33:39.711: INFO: rc: 1 May 6 23:33:39.711: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 23:33:49.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 23:33:49.807: INFO: rc: 1 May 6 23:33:49.807: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 23:33:59.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 23:34:00.149: INFO: rc: 1 May 6 23:34:00.149: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 23:34:10.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 23:34:10.240: INFO: rc: 1 May 6 23:34:10.240: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 23:34:20.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 23:34:20.876: INFO: rc: 1 May 6 23:34:20.876: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 23:34:30.876: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 23:34:30.967: INFO: rc: 1 May 6 23:34:30.967: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 23:34:40.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 23:34:41.185: INFO: rc: 1 May 6 23:34:41.185: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 23:34:51.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 23:34:51.287: INFO: rc: 1 May 6 23:34:51.287: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 23:35:01.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 23:35:01.487: INFO: rc: 1 May 6 23:35:01.487: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 23:35:11.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 23:35:11.596: INFO: rc: 1 May 6 23:35:11.596: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 23:35:21.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 23:35:21.699: INFO: rc: 1 May 6 23:35:21.699: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 23:35:31.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 23:35:31.806: INFO: rc: 1 May 6 23:35:31.806: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 6 23:35:41.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9895 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 6 23:35:41.908: INFO: rc: 1 May 6 23:35:41.908: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: May 6 23:35:41.908: INFO: Scaling statefulset ss to 0 May 6 23:35:41.916: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 6 23:35:41.918: INFO: Deleting all statefulset in ns statefulset-9895 May 6 23:35:41.920: INFO: Scaling statefulset ss to 0 May 6 23:35:41.928: INFO: Waiting for statefulset status.replicas updated to 0 May 6 23:35:41.930: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:35:41.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9895" for this suite. • [SLOW TEST:356.561 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":144,"skipped":2237,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:35:41.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-4fab0eb0-65a8-4386-ae76-e3e38e131b57 STEP: Creating a pod to test consume secrets May 6 23:35:42.162: INFO: Waiting up to 5m0s for pod 
"pod-projected-secrets-8030c234-3806-4e04-ab38-523cdcb6aeb2" in namespace "projected-4764" to be "success or failure" May 6 23:35:42.199: INFO: Pod "pod-projected-secrets-8030c234-3806-4e04-ab38-523cdcb6aeb2": Phase="Pending", Reason="", readiness=false. Elapsed: 37.187538ms May 6 23:35:44.204: INFO: Pod "pod-projected-secrets-8030c234-3806-4e04-ab38-523cdcb6aeb2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041883187s May 6 23:35:46.242: INFO: Pod "pod-projected-secrets-8030c234-3806-4e04-ab38-523cdcb6aeb2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080196628s May 6 23:35:48.253: INFO: Pod "pod-projected-secrets-8030c234-3806-4e04-ab38-523cdcb6aeb2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.091265175s STEP: Saw pod success May 6 23:35:48.253: INFO: Pod "pod-projected-secrets-8030c234-3806-4e04-ab38-523cdcb6aeb2" satisfied condition "success or failure" May 6 23:35:48.277: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-8030c234-3806-4e04-ab38-523cdcb6aeb2 container projected-secret-volume-test: STEP: delete the pod May 6 23:35:48.618: INFO: Waiting for pod pod-projected-secrets-8030c234-3806-4e04-ab38-523cdcb6aeb2 to disappear May 6 23:35:48.654: INFO: Pod pod-projected-secrets-8030c234-3806-4e04-ab38-523cdcb6aeb2 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:35:48.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4764" for this suite. • [SLOW TEST:6.837 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":145,"skipped":2256,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:35:48.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 6 23:35:49.733: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 6 23:35:51.743: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724404949, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724404949, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724404949, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724404949, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 23:35:53.748: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724404949, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724404949, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724404949, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724404949, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 23:35:57.073: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:36:09.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9758" for this suite. STEP: Destroying namespace "webhook-9758-markers" for this suite. 
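------------------------------
The timeout behaviour exercised above comes down to two fields on the webhook registration: timeoutSeconds bounds how long the API server waits for the webhook, and failurePolicy decides whether a timed-out call rejects (Fail) or admits (Ignore) the request, with the timeout defaulting to 10s in v1 as the log notes. A minimal Go sketch of such an admissionregistration/v1 object, under stated assumptions: the webhook name "slow-webhook.example.com" and path "/always-allow-delay-5s" are illustrative placeholders, and only the service "e2e-test-webhook" and namespace "webhook-9758" come from the log.

    package webhookexample

    import (
        admissionv1 "k8s.io/api/admissionregistration/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // slowWebhookConfig sketches the registration the steps above describe: a
    // webhook whose backend answers in ~5s, registered with a 1s timeout. With
    // FailurePolicy=Fail the timed-out request is rejected; with Ignore it is
    // admitted. Leaving TimeoutSeconds nil defaults to 10s in v1.
    func slowWebhookConfig(caBundle []byte) *admissionv1.ValidatingWebhookConfiguration {
        timeout := int32(1)                            // shorter than the webhook's 5s latency
        policy := admissionv1.Ignore                   // tolerate the timeout instead of rejecting
        sideEffects := admissionv1.SideEffectClassNone // required in v1
        path := "/always-allow-delay-5s"               // hypothetical slow endpoint
        return &admissionv1.ValidatingWebhookConfiguration{
            ObjectMeta: metav1.ObjectMeta{Name: "slow-webhook.example.com"}, // illustrative name
            Webhooks: []admissionv1.ValidatingWebhook{{
                Name: "slow-webhook.example.com",
                ClientConfig: admissionv1.WebhookClientConfig{
                    Service: &admissionv1.ServiceReference{
                        Namespace: "webhook-9758",     // from the log
                        Name:      "e2e-test-webhook", // from the log
                        Path:      &path,
                    },
                    CABundle: caBundle,
                },
                Rules: []admissionv1.RuleWithOperations{{
                    Operations: []admissionv1.OperationType{admissionv1.Create},
                    Rule: admissionv1.Rule{
                        APIGroups:   []string{""},
                        APIVersions: []string{"v1"},
                        Resources:   []string{"configmaps"},
                    },
                }},
                FailurePolicy:           &policy,
                TimeoutSeconds:          &timeout,
                SideEffects:             &sideEffects,
                AdmissionReviewVersions: []string{"v1", "v1beta1"},
            }},
        }
    }

Registering the same webhook with FailurePolicy set to admissionv1.Fail is what makes the 1s-timeout request in the first step above fail.
------------------------------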
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:20.757 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":146,"skipped":2270,"failed":0} SSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:36:09.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-7519.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-7519.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-7519.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-7519.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-7519.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-7519.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-7519.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-7519.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-7519.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7519.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 6 23:36:15.724: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local from pod dns-7519/dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a: the server could not find the requested resource (get pods dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a) May 6 23:36:15.727: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local from pod dns-7519/dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a: the server could not find the requested resource (get pods dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a) May 6 23:36:15.730: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7519.svc.cluster.local from pod dns-7519/dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a: the server could not find the requested resource (get pods dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a) May 6 23:36:15.732: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7519.svc.cluster.local from pod dns-7519/dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a: the server could not find the requested resource (get pods dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a) May 6 23:36:15.738: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local from pod dns-7519/dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a: the server could not find the requested resource (get pods dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a) May 6 23:36:15.740: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local from pod dns-7519/dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a: the server could not find the requested resource (get pods dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a) May 6 23:36:15.742: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7519.svc.cluster.local from pod 
dns-7519/dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a: the server could not find the requested resource (get pods dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a) May 6 23:36:15.744: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7519.svc.cluster.local from pod dns-7519/dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a: the server could not find the requested resource (get pods dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a) May 6 23:36:15.748: INFO: Lookups using dns-7519/dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7519.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7519.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local jessie_udp@dns-test-service-2.dns-7519.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7519.svc.cluster.local] May 6 23:36:20.753: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local from pod dns-7519/dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a: the server could not find the requested resource (get pods dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a) May 6 23:36:20.756: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local from pod dns-7519/dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a: the server could not find the requested resource (get pods dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a) May 6 23:36:20.760: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7519.svc.cluster.local from pod dns-7519/dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a: the server could not find the requested resource (get pods dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a) May 6 23:36:20.763: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7519.svc.cluster.local from pod dns-7519/dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a: the server could not find the requested resource (get pods dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a) May 6 23:36:20.773: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local from pod dns-7519/dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a: the server could not find the requested resource (get pods dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a) May 6 23:36:20.776: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local from pod dns-7519/dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a: the server could not find the requested resource (get pods dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a) May 6 23:36:20.779: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7519.svc.cluster.local from pod dns-7519/dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a: the server could not find the requested resource (get pods dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a) May 6 23:36:20.782: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7519.svc.cluster.local from pod dns-7519/dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a: the server could not find the requested resource (get pods dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a) May 6 23:36:20.788: INFO: Lookups using dns-7519/dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local 
wheezy_udp@dns-test-service-2.dns-7519.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7519.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local jessie_udp@dns-test-service-2.dns-7519.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7519.svc.cluster.local] May 6 23:36:25.754: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local from pod dns-7519/dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a: the server could not find the requested resource (get pods dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a) May 6 23:36:25.757: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local from pod dns-7519/dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a: the server could not find the requested resource (get pods dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a) May 6 23:36:25.760: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7519.svc.cluster.local from pod dns-7519/dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a: the server could not find the requested resource (get pods dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a) May 6 23:36:25.763: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7519.svc.cluster.local from pod dns-7519/dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a: the server could not find the requested resource (get pods dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a) May 6 23:36:25.771: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local from pod dns-7519/dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a: the server could not find the requested resource (get pods dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a) May 6 23:36:25.773: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local from pod dns-7519/dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a: the server could not find the requested resource (get pods dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a) May 6 23:36:25.776: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7519.svc.cluster.local from pod dns-7519/dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a: the server could not find the requested resource (get pods dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a) May 6 23:36:25.778: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7519.svc.cluster.local from pod dns-7519/dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a: the server could not find the requested resource (get pods dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a) May 6 23:36:25.784: INFO: Lookups using dns-7519/dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7519.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7519.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local jessie_udp@dns-test-service-2.dns-7519.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7519.svc.cluster.local] May 6 23:36:30.752: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local from pod dns-7519/dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a: the server could not find the requested resource (get pods dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a) May 6 23:36:30.755: INFO: Unable to read 
wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local from pod dns-7519/dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a: the server could not find the requested resource (get pods dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a) May 6 23:36:30.758: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7519.svc.cluster.local from pod dns-7519/dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a: the server could not find the requested resource (get pods dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a) May 6 23:36:30.761: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7519.svc.cluster.local from pod dns-7519/dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a: the server could not find the requested resource (get pods dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a) May 6 23:36:30.769: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local from pod dns-7519/dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a: the server could not find the requested resource (get pods dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a) May 6 23:36:30.772: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local from pod dns-7519/dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a: the server could not find the requested resource (get pods dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a) May 6 23:36:30.775: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7519.svc.cluster.local from pod dns-7519/dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a: the server could not find the requested resource (get pods dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a) May 6 23:36:30.778: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7519.svc.cluster.local from pod dns-7519/dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a: the server could not find the requested resource (get pods dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a) May 6 23:36:30.784: INFO: Lookups using dns-7519/dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7519.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7519.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local jessie_udp@dns-test-service-2.dns-7519.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7519.svc.cluster.local] May 6 23:36:35.754: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local from pod dns-7519/dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a: the server could not find the requested resource (get pods dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a) May 6 23:36:35.758: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local from pod dns-7519/dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a: the server could not find the requested resource (get pods dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a) May 6 23:36:35.761: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7519.svc.cluster.local from pod dns-7519/dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a: the server could not find the requested resource (get pods dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a) May 6 23:36:35.764: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7519.svc.cluster.local from pod dns-7519/dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a: the server could not find the requested resource 
(get pods dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a) May 6 23:36:35.776: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local from pod dns-7519/dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a: the server could not find the requested resource (get pods dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a) May 6 23:36:35.779: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local from pod dns-7519/dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a: the server could not find the requested resource (get pods dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a) May 6 23:36:35.782: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7519.svc.cluster.local from pod dns-7519/dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a: the server could not find the requested resource (get pods dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a) May 6 23:36:35.784: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7519.svc.cluster.local from pod dns-7519/dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a: the server could not find the requested resource (get pods dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a) May 6 23:36:35.789: INFO: Lookups using dns-7519/dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7519.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7519.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local jessie_udp@dns-test-service-2.dns-7519.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7519.svc.cluster.local] May 6 23:36:40.892: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local from pod dns-7519/dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a: the server could not find the requested resource (get pods dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a) May 6 23:36:40.895: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local from pod dns-7519/dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a: the server could not find the requested resource (get pods dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a) May 6 23:36:40.897: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-7519.svc.cluster.local from pod dns-7519/dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a: the server could not find the requested resource (get pods dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a) May 6 23:36:41.125: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-7519.svc.cluster.local from pod dns-7519/dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a: the server could not find the requested resource (get pods dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a) May 6 23:36:41.174: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local from pod dns-7519/dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a: the server could not find the requested resource (get pods dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a) May 6 23:36:41.176: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local from pod dns-7519/dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a: the server could not find the requested resource (get pods dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a) May 6 23:36:41.179: INFO: Unable to read jessie_udp@dns-test-service-2.dns-7519.svc.cluster.local from 
pod dns-7519/dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a: the server could not find the requested resource (get pods dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a) May 6 23:36:41.182: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-7519.svc.cluster.local from pod dns-7519/dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a: the server could not find the requested resource (get pods dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a) May 6 23:36:41.188: INFO: Lookups using dns-7519/dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local wheezy_udp@dns-test-service-2.dns-7519.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-7519.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-7519.svc.cluster.local jessie_udp@dns-test-service-2.dns-7519.svc.cluster.local jessie_tcp@dns-test-service-2.dns-7519.svc.cluster.local] May 6 23:36:45.839: INFO: DNS probes using dns-7519/dns-test-34d41f0e-e16d-44f2-85ee-da986292a56a succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:36:46.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7519" for this suite. • [SLOW TEST:37.413 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":147,"skipped":2274,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:36:46.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 6 23:36:47.097: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:36:56.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5195" for this suite. 
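------------------------------
Listing CRDs the way this spec verifies can be done with the apiextensions clientset; a minimal sketch, assuming a kubeconfig at the default path (not the e2e framework's own helper):

// Sketch only: enumerate every CustomResourceDefinition; the conformance test
// creates a few CRDs and checks that they show up in the list.
package main

import (
	"context"
	"fmt"
	"path/filepath"

	clientset "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client, err := clientset.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	crds, err := client.ApiextensionsV1().CustomResourceDefinitions().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, crd := range crds.Items {
		fmt.Println(crd.Name)
	}
}
------------------------------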
• [SLOW TEST:9.449 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":148,"skipped":2302,"failed":0} SSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:36:56.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 6 23:36:56.562: INFO: Waiting up to 5m0s for pod "downwardapi-volume-27a5dcf3-471d-4eda-ae5a-29aeb55de598" in namespace "downward-api-2359" to be "success or failure" May 6 23:36:56.636: INFO: Pod "downwardapi-volume-27a5dcf3-471d-4eda-ae5a-29aeb55de598": Phase="Pending", Reason="", readiness=false. Elapsed: 74.728105ms May 6 23:36:58.641: INFO: Pod "downwardapi-volume-27a5dcf3-471d-4eda-ae5a-29aeb55de598": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079282489s May 6 23:37:00.645: INFO: Pod "downwardapi-volume-27a5dcf3-471d-4eda-ae5a-29aeb55de598": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.08317095s STEP: Saw pod success May 6 23:37:00.645: INFO: Pod "downwardapi-volume-27a5dcf3-471d-4eda-ae5a-29aeb55de598" satisfied condition "success or failure" May 6 23:37:00.647: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-27a5dcf3-471d-4eda-ae5a-29aeb55de598 container client-container: STEP: delete the pod May 6 23:37:01.017: INFO: Waiting for pod downwardapi-volume-27a5dcf3-471d-4eda-ae5a-29aeb55de598 to disappear May 6 23:37:01.020: INFO: Pod downwardapi-volume-27a5dcf3-471d-4eda-ae5a-29aeb55de598 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:37:01.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2359" for this suite. 
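------------------------------
A minimal sketch of the kind of test pod this spec creates: a downward API volume file backed by a resourceFieldRef pointing at the container's own memory request. Names, the image, and the 32Mi value are placeholders, not the suite's generated ones:

// Sketch only: the container reads its own requests.memory back from a file.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("32Mi")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_request",
							// Resolved by the kubelet when the volume is set up.
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.memory",
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------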
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":149,"skipped":2308,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:37:01.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 6 23:37:11.358: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 6 23:37:11.401: INFO: Pod pod-with-poststart-exec-hook still exists May 6 23:37:13.401: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 6 23:37:13.406: INFO: Pod pod-with-poststart-exec-hook still exists May 6 23:37:15.402: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 6 23:37:15.419: INFO: Pod pod-with-poststart-exec-hook still exists May 6 23:37:17.402: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 6 23:37:17.406: INFO: Pod pod-with-poststart-exec-hook still exists May 6 23:37:19.402: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 6 23:37:20.839: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:37:20.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2969" for this suite. 
• [SLOW TEST:19.949 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":150,"skipped":2339,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:37:20.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-29510498-c84f-4bc5-96f0-981507c8f176 STEP: Creating configMap with name cm-test-opt-upd-5e531bed-5c90-4b84-a401-507398e1154f STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-29510498-c84f-4bc5-96f0-981507c8f176 STEP: Updating configmap cm-test-opt-upd-5e531bed-5c90-4b84-a401-507398e1154f STEP: Creating configMap with name cm-test-opt-create-e6b04088-07ff-411d-babc-e3eb6f90b63b STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:37:42.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5293" for this suite. 
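------------------------------
A sketch of the optional ConfigMap volume this spec relies on. With Optional set, the pod starts (and keeps running) even while a referenced ConfigMap is missing, and the kubelet projects creations and updates into the volume, which is what the delete/update/create sequence above observes. Names are placeholders:

// Sketch only: a pod that keeps reading a key from an optional ConfigMap volume.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	optional := true
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:         "cm-watcher",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "while true; do cat /etc/cm/data-1 2>/dev/null; sleep 1; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cm-vol", MountPath: "/etc/cm"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "cm-vol",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-create"},
						Optional:             &optional, // tolerate a missing ConfigMap
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------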
• [SLOW TEST:21.676 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":151,"skipped":2353,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:37:42.676: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-ee627fe9-e947-4427-bd8e-c5a2cdaa7dd8 in namespace container-probe-4375 May 6 23:37:48.800: INFO: Started pod busybox-ee627fe9-e947-4427-bd8e-c5a2cdaa7dd8 in namespace container-probe-4375 STEP: checking the pod's current state and verifying that restartCount is present May 6 23:37:48.803: INFO: Initial restart count of pod busybox-ee627fe9-e947-4427-bd8e-c5a2cdaa7dd8 is 0 May 6 23:38:37.973: INFO: Restart count of pod container-probe-4375/busybox-ee627fe9-e947-4427-bd8e-c5a2cdaa7dd8 is now 1 (49.16968831s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:38:38.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4375" for this suite. 
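------------------------------
A sketch of the probe pattern behind the restart recorded above: the container removes /tmp/health after 10 seconds, the exec probe starts failing, and the kubelet restarts the container, bumping restartCount to 1. Delays mirror the upstream pattern but are illustrative; ProbeHandler is the embedded field's current name (older k8s.io/api releases call it Handler):

// Sketch only: "cat /tmp/health" as an exec liveness probe.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-liveness-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "busybox",
				// Healthy for 10s, then the probe target disappears.
				Command: []string{"sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					ProbeHandler: corev1.ProbeHandler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    1,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------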
• [SLOW TEST:55.374 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":152,"skipped":2364,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:38:38.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's args May 6 23:38:38.158: INFO: Waiting up to 5m0s for pod "var-expansion-e7ec705f-0f99-43f4-80c3-50d39d4da123" in namespace "var-expansion-6839" to be "success or failure" May 6 23:38:38.167: INFO: Pod "var-expansion-e7ec705f-0f99-43f4-80c3-50d39d4da123": Phase="Pending", Reason="", readiness=false. Elapsed: 8.961972ms May 6 23:38:40.216: INFO: Pod "var-expansion-e7ec705f-0f99-43f4-80c3-50d39d4da123": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058503226s May 6 23:38:42.223: INFO: Pod "var-expansion-e7ec705f-0f99-43f4-80c3-50d39d4da123": Phase="Running", Reason="", readiness=true. Elapsed: 4.064955592s May 6 23:38:44.226: INFO: Pod "var-expansion-e7ec705f-0f99-43f4-80c3-50d39d4da123": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.068525099s STEP: Saw pod success May 6 23:38:44.226: INFO: Pod "var-expansion-e7ec705f-0f99-43f4-80c3-50d39d4da123" satisfied condition "success or failure" May 6 23:38:44.228: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-e7ec705f-0f99-43f4-80c3-50d39d4da123 container dapi-container: STEP: delete the pod May 6 23:38:44.266: INFO: Waiting for pod var-expansion-e7ec705f-0f99-43f4-80c3-50d39d4da123 to disappear May 6 23:38:44.298: INFO: Pod var-expansion-e7ec705f-0f99-43f4-80c3-50d39d4da123 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:38:44.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6839" for this suite. 
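------------------------------
The substitution under test is kubelet-side $(VAR) expansion in the container's args, not shell expansion; the kubelet resolves the reference from the container's env before the process starts. A minimal sketch with placeholder values:

// Sketch only: $(TEST_VAR) in Args is expanded by the kubelet.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c"},
				Args:    []string{"echo value is $(TEST_VAR)"}, // expanded before the shell runs
				Env:     []corev1.EnvVar{{Name: "TEST_VAR", Value: "test-value"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
------------------------------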
• [SLOW TEST:6.255 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":153,"skipped":2419,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:38:44.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 6 23:38:44.383: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8840 /api/v1/namespaces/watch-8840/configmaps/e2e-watch-test-configmap-a bf69a25d-ec31-4661-9c52-b40b7420c5f0 14031426 0 2020-05-06 23:38:44 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 6 23:38:44.384: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8840 /api/v1/namespaces/watch-8840/configmaps/e2e-watch-test-configmap-a bf69a25d-ec31-4661-9c52-b40b7420c5f0 14031426 0 2020-05-06 23:38:44 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 6 23:38:54.392: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8840 /api/v1/namespaces/watch-8840/configmaps/e2e-watch-test-configmap-a bf69a25d-ec31-4661-9c52-b40b7420c5f0 14031464 0 2020-05-06 23:38:44 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 6 23:38:54.393: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8840 /api/v1/namespaces/watch-8840/configmaps/e2e-watch-test-configmap-a bf69a25d-ec31-4661-9c52-b40b7420c5f0 14031464 0 2020-05-06 23:38:44 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 6 23:39:04.401: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8840 /api/v1/namespaces/watch-8840/configmaps/e2e-watch-test-configmap-a bf69a25d-ec31-4661-9c52-b40b7420c5f0 14031494 0 2020-05-06 23:38:44 +0000 
UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 6 23:39:04.401: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8840 /api/v1/namespaces/watch-8840/configmaps/e2e-watch-test-configmap-a bf69a25d-ec31-4661-9c52-b40b7420c5f0 14031494 0 2020-05-06 23:38:44 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 6 23:39:14.410: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8840 /api/v1/namespaces/watch-8840/configmaps/e2e-watch-test-configmap-a bf69a25d-ec31-4661-9c52-b40b7420c5f0 14031524 0 2020-05-06 23:38:44 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 6 23:39:14.410: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8840 /api/v1/namespaces/watch-8840/configmaps/e2e-watch-test-configmap-a bf69a25d-ec31-4661-9c52-b40b7420c5f0 14031524 0 2020-05-06 23:38:44 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 6 23:39:24.418: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8840 /api/v1/namespaces/watch-8840/configmaps/e2e-watch-test-configmap-b 802da9a3-1907-4f28-b853-2ce3c4a52905 14031555 0 2020-05-06 23:39:24 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 6 23:39:24.419: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8840 /api/v1/namespaces/watch-8840/configmaps/e2e-watch-test-configmap-b 802da9a3-1907-4f28-b853-2ce3c4a52905 14031555 0 2020-05-06 23:39:24 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 6 23:39:34.426: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8840 /api/v1/namespaces/watch-8840/configmaps/e2e-watch-test-configmap-b 802da9a3-1907-4f28-b853-2ce3c4a52905 14031586 0 2020-05-06 23:39:24 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 6 23:39:34.426: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8840 /api/v1/namespaces/watch-8840/configmaps/e2e-watch-test-configmap-b 802da9a3-1907-4f28-b853-2ce3c4a52905 14031586 0 2020-05-06 23:39:24 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:39:44.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8840" for this suite. 
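------------------------------
A minimal sketch of one of the three label-filtered watches this spec opens (label A here; the test also watches label B and "A or B" to check each watcher sees only its own events). It assumes a reachable kubeconfig and uses the default namespace as a stand-in:

// Sketch only: stream ADDED/MODIFIED/DELETED events for matching ConfigMaps.
package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	w, err := client.CoreV1().ConfigMaps("default").Watch(context.TODO(), metav1.ListOptions{
		LabelSelector: "watch-this-configmap=multiple-watchers-A",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Println("Got :", ev.Type) // ADDED, MODIFIED, DELETED, as in the log above
	}
}
------------------------------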
• [SLOW TEST:60.129 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":154,"skipped":2447,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:39:44.437: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 6 23:39:44.496: INFO: Waiting up to 5m0s for pod "downwardapi-volume-25cdb4f5-6c0b-416f-8e52-db220689b0b8" in namespace "downward-api-6377" to be "success or failure" May 6 23:39:44.540: INFO: Pod "downwardapi-volume-25cdb4f5-6c0b-416f-8e52-db220689b0b8": Phase="Pending", Reason="", readiness=false. Elapsed: 44.568355ms May 6 23:39:46.667: INFO: Pod "downwardapi-volume-25cdb4f5-6c0b-416f-8e52-db220689b0b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.170765302s May 6 23:39:48.674: INFO: Pod "downwardapi-volume-25cdb4f5-6c0b-416f-8e52-db220689b0b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.178565595s STEP: Saw pod success May 6 23:39:48.674: INFO: Pod "downwardapi-volume-25cdb4f5-6c0b-416f-8e52-db220689b0b8" satisfied condition "success or failure" May 6 23:39:48.678: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-25cdb4f5-6c0b-416f-8e52-db220689b0b8 container client-container: STEP: delete the pod May 6 23:39:48.718: INFO: Waiting for pod downwardapi-volume-25cdb4f5-6c0b-416f-8e52-db220689b0b8 to disappear May 6 23:39:48.751: INFO: Pod downwardapi-volume-25cdb4f5-6c0b-416f-8e52-db220689b0b8 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:39:48.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6377" for this suite. 
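------------------------------
The podname variant differs from the memory-request test earlier only in using fieldRef instead of resourceFieldRef; a sketch of just the volume source, with placeholder names:

// Sketch only: metadata.name resolves to the pod's own name at mount time.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path:     "podname",
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}
------------------------------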
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":155,"skipped":2458,"failed":0} SSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:39:48.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 6 23:39:48.862: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-b6f7b907-5fa6-48b6-a901-ce241ce328f8" in namespace "security-context-test-5737" to be "success or failure" May 6 23:39:48.902: INFO: Pod "busybox-privileged-false-b6f7b907-5fa6-48b6-a901-ce241ce328f8": Phase="Pending", Reason="", readiness=false. Elapsed: 40.177744ms May 6 23:39:50.936: INFO: Pod "busybox-privileged-false-b6f7b907-5fa6-48b6-a901-ce241ce328f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07361312s May 6 23:39:52.940: INFO: Pod "busybox-privileged-false-b6f7b907-5fa6-48b6-a901-ce241ce328f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.077843886s May 6 23:39:52.940: INFO: Pod "busybox-privileged-false-b6f7b907-5fa6-48b6-a901-ce241ce328f8" satisfied condition "success or failure" May 6 23:39:52.947: INFO: Got logs for pod "busybox-privileged-false-b6f7b907-5fa6-48b6-a901-ce241ce328f8": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:39:52.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5737" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":156,"skipped":2462,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:39:52.957: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:39:59.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-2835" for this suite. STEP: Destroying namespace "nsdeletetest-9753" for this suite. May 6 23:39:59.523: INFO: Namespace nsdeletetest-9753 was already deleted STEP: Destroying namespace "nsdeletetest-1687" for this suite. 
• [SLOW TEST:6.570 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":157,"skipped":2492,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:39:59.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 6 23:39:59.967: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 6 23:40:01.978: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724405200, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724405200, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724405200, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724405199, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 23:40:05.015: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace 
that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:40:15.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3927" for this suite. STEP: Destroying namespace "webhook-3927-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.832 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":158,"skipped":2499,"failed":0} SS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:40:15.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 6 23:40:15.443: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 6 23:40:20.457: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 6 23:40:20.457: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 6 23:40:22.460: INFO: Creating deployment "test-rollover-deployment" May 6 23:40:22.475: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 6 23:40:24.638: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 6 23:40:25.698: INFO: Ensure that both replica sets have 1 created replica May 6 23:40:26.296: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 6 23:40:26.341: INFO: Updating deployment test-rollover-deployment May 6 23:40:26.341: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 6 23:40:28.587: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 6 23:40:28.593: INFO: Make sure deployment "test-rollover-deployment" is complete May 6 23:40:28.602: INFO: all replica sets need to contain the pod-template-hash label May 6 23:40:28.602: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724405222, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724405222, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724405227, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724405222, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 23:40:30.608: INFO: all replica sets need to contain the pod-template-hash label May 6 23:40:30.608: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724405222, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724405222, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724405227, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724405222, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 23:40:32.609: INFO: all replica sets need to contain the pod-template-hash label May 6 23:40:32.609: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724405222, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724405222, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724405230, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724405222, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 23:40:34.616: INFO: all replica sets need to contain the pod-template-hash label May 6 23:40:34.616: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724405222, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724405222, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724405230, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724405222, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 23:40:36.610: INFO: all replica sets need to contain the pod-template-hash label May 6 23:40:36.610: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724405222, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724405222, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724405230, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724405222, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 23:40:38.628: INFO: all replica sets need to contain the pod-template-hash label May 6 23:40:38.628: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724405222, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724405222, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724405230, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724405222, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 23:40:40.635: INFO: all replica sets need to contain the pod-template-hash label May 6 23:40:40.635: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724405222, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724405222, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724405230, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724405222, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 
23:40:42.610: INFO: May 6 23:40:42.610: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 6 23:40:42.617: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-9277 /apis/apps/v1/namespaces/deployment-9277/deployments/test-rollover-deployment 462ed441-f718-4d80-8f03-b749c144ee4b 14032014 2 2020-05-06 23:40:22 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0036da7f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-06 23:40:22 +0000 UTC,LastTransitionTime:2020-05-06 23:40:22 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-05-06 23:40:41 +0000 UTC,LastTransitionTime:2020-05-06 23:40:22 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 6 23:40:42.620: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-9277 /apis/apps/v1/namespaces/deployment-9277/replicasets/test-rollover-deployment-574d6dfbff ed00bb4e-fe9d-4017-8844-8eb75ecd198c 14032003 2 2020-05-06 23:40:26 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 462ed441-f718-4d80-8f03-b749c144ee4b 0xc0034d6fc7 0xc0034d6fc8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0034d7038 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 6 23:40:42.620: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 6 23:40:42.620: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-9277 /apis/apps/v1/namespaces/deployment-9277/replicasets/test-rollover-controller adb34a3c-d0a5-4edf-ac3b-c545723aaba7 14032012 2 2020-05-06 23:40:15 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 462ed441-f718-4d80-8f03-b749c144ee4b 0xc0034d6ef7 0xc0034d6ef8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0034d6f58 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 6 23:40:42.620: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-9277 /apis/apps/v1/namespaces/deployment-9277/replicasets/test-rollover-deployment-f6c94f66c 207926f0-36b1-4043-98bb-e485d4c50b8d 14031947 2 2020-05-06 23:40:22 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 462ed441-f718-4d80-8f03-b749c144ee4b 0xc0034d70a0 0xc0034d70a1}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0034d7118 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 6 23:40:42.622: INFO: Pod "test-rollover-deployment-574d6dfbff-fx976" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-fx976 test-rollover-deployment-574d6dfbff- deployment-9277 /api/v1/namespaces/deployment-9277/pods/test-rollover-deployment-574d6dfbff-fx976 ef9c0220-d384-4cdb-909e-ab89b6ccf604 14031971 0 2020-05-06 23:40:27 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff ed00bb4e-fe9d-4017-8844-8eb75ecd198c 0xc0034d7677 0xc0034d7678}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9jlzt,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9jlzt,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9jlzt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]Topolog
ySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:40:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:40:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:40:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:40:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.149,StartTime:2020-05-06 23:40:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-06 23:40:30 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://001ae6118c3219248075040316b9a58dd5dc23d123bdb30d80d687ad5f84e7d8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.149,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:40:42.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9277" for this suite. 
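For context, the rollover Deployment dumped above carries strategy RollingUpdate with MaxUnavailable:0, MaxSurge:1 and MinReadySeconds:10, which is what forces the gradual, always-available replacement of the old ReplicaSet that this test asserts on. A minimal manifest with the same shape might look like the sketch below; the name, label, image, and strategy values are taken from the dump, everything else is illustrative:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-deployment
spec:
  replicas: 1
  minReadySeconds: 10        # a new pod must stay Ready 10s before it counts as available
  selector:
    matchLabels:
      name: rollover-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0      # never drop below the desired replica count during rollout
      maxSurge: 1            # allow one extra pod while the new ReplicaSet comes up
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
      - name: agnhost
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8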
• [SLOW TEST:27.270 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":159,"skipped":2501,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:40:42.630: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-586bfcb5-94ae-4263-827a-881aab0bee9f STEP: Creating a pod to test consume configMaps May 6 23:40:42.770: INFO: Waiting up to 5m0s for pod "pod-configmaps-91833755-8d77-4a52-82e4-fadf5deb0589" in namespace "configmap-7989" to be "success or failure" May 6 23:40:42.786: INFO: Pod "pod-configmaps-91833755-8d77-4a52-82e4-fadf5deb0589": Phase="Pending", Reason="", readiness=false. Elapsed: 16.003472ms May 6 23:40:44.790: INFO: Pod "pod-configmaps-91833755-8d77-4a52-82e4-fadf5deb0589": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019802058s May 6 23:40:46.794: INFO: Pod "pod-configmaps-91833755-8d77-4a52-82e4-fadf5deb0589": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023999928s STEP: Saw pod success May 6 23:40:46.794: INFO: Pod "pod-configmaps-91833755-8d77-4a52-82e4-fadf5deb0589" satisfied condition "success or failure" May 6 23:40:46.797: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-91833755-8d77-4a52-82e4-fadf5deb0589 container configmap-volume-test: STEP: delete the pod May 6 23:40:46.855: INFO: Waiting for pod pod-configmaps-91833755-8d77-4a52-82e4-fadf5deb0589 to disappear May 6 23:40:46.877: INFO: Pod pod-configmaps-91833755-8d77-4a52-82e4-fadf5deb0589 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:40:46.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7989" for this suite. 
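A pod along the lines of what this test creates might look like the following sketch: the ConfigMap is mounted as a volume with an items mapping (a key remapped to a chosen file path) and the pod runs as a non-root UID. Only the container name configmap-volume-test comes from the log; the pod name, ConfigMap name, key, path, UID, and image are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example        # illustrative; the test generates a UUID-based name
spec:
  securityContext:
    runAsUser: 1000                   # "as non-root": any non-zero UID
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox                    # illustrative
    command: ["cat", "/etc/configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: my-configmap              # illustrative
      items:                          # the "mappings": remap key data-1 to a nested path
      - key: data-1
        path: path/to/data-1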
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":160,"skipped":2584,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:40:46.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-12d2e3a4-8b40-4171-a449-f2094fc85b61 in namespace container-probe-7314 May 6 23:40:51.225: INFO: Started pod busybox-12d2e3a4-8b40-4171-a449-f2094fc85b61 in namespace container-probe-7314 STEP: checking the pod's current state and verifying that restartCount is present May 6 23:40:51.227: INFO: Initial restart count of pod busybox-12d2e3a4-8b40-4171-a449-f2094fc85b61 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:44:52.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7314" for this suite. 
• [SLOW TEST:246.100 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":161,"skipped":2602,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:44:52.986: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 6 23:44:53.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' May 6 23:44:53.439: INFO: stderr: "" May 6 23:44:53.439: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.4\", GitCommit:\"8d8aa39598534325ad77120c120a22b3a990b5ea\", GitTreeState:\"clean\", BuildDate:\"2020-05-06T19:23:43Z\", GoVersion:\"go1.13.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.2\", GitCommit:\"59603c6e503c87169aea6106f57b9f242f64df89\", GitTreeState:\"clean\", BuildDate:\"2020-02-07T01:05:17Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:44:53.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3535" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":162,"skipped":2615,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:44:53.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:45:53.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7371" for this suite. • [SLOW TEST:60.098 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":163,"skipped":2626,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:45:53.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's command May 6 23:45:53.746: INFO: Waiting up to 5m0s for pod "var-expansion-ef3f01ef-b01b-4cb7-8a79-0b8d579fdc5e" in namespace "var-expansion-3374" to be "success or failure" May 6 23:45:53.851: INFO: Pod "var-expansion-ef3f01ef-b01b-4cb7-8a79-0b8d579fdc5e": Phase="Pending", Reason="", readiness=false. Elapsed: 104.619665ms May 6 23:45:55.855: INFO: Pod "var-expansion-ef3f01ef-b01b-4cb7-8a79-0b8d579fdc5e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108782078s May 6 23:45:57.859: INFO: Pod "var-expansion-ef3f01ef-b01b-4cb7-8a79-0b8d579fdc5e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.112808732s STEP: Saw pod success May 6 23:45:57.859: INFO: Pod "var-expansion-ef3f01ef-b01b-4cb7-8a79-0b8d579fdc5e" satisfied condition "success or failure" May 6 23:45:57.862: INFO: Trying to get logs from node jerma-worker pod var-expansion-ef3f01ef-b01b-4cb7-8a79-0b8d579fdc5e container dapi-container: STEP: delete the pod May 6 23:45:57.916: INFO: Waiting for pod var-expansion-ef3f01ef-b01b-4cb7-8a79-0b8d579fdc5e to disappear May 6 23:45:57.958: INFO: Pod var-expansion-ef3f01ef-b01b-4cb7-8a79-0b8d579fdc5e no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:45:57.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3374" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":164,"skipped":2641,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:45:57.966: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap that has name configmap-test-emptyKey-824940e3-c3e3-43da-b004-d00f2a3e4f50 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:45:58.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1617" for this suite. 
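No pod is involved in this one: the test only asserts that API-server validation rejects a ConfigMap whose data map contains an empty key. A manifest that would fail the same way (name illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-emptykey      # illustrative
data:
  "": "value"                        # empty keys are invalid; the create call returns a validation error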
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":165,"skipped":2653,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:45:58.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 6 23:45:58.513: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 6 23:46:00.540: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724405558, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724405558, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724405558, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724405558, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 23:46:04.032: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 6 23:46:04.035: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-8037-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:46:05.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6119" for this suite. STEP: Destroying namespace "webhook-6119-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.199 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":166,"skipped":2661,"failed":0} [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:46:05.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium May 6 23:46:05.380: INFO: Waiting up to 5m0s for pod "pod-0366ea69-ed68-4bc4-bb31-d1d6aa26b2d4" in namespace "emptydir-8532" to be "success or failure" May 6 23:46:05.405: INFO: Pod "pod-0366ea69-ed68-4bc4-bb31-d1d6aa26b2d4": Phase="Pending", Reason="", readiness=false. Elapsed: 24.783459ms May 6 23:46:07.409: INFO: Pod "pod-0366ea69-ed68-4bc4-bb31-d1d6aa26b2d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02853926s May 6 23:46:09.414: INFO: Pod "pod-0366ea69-ed68-4bc4-bb31-d1d6aa26b2d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034405582s STEP: Saw pod success May 6 23:46:09.415: INFO: Pod "pod-0366ea69-ed68-4bc4-bb31-d1d6aa26b2d4" satisfied condition "success or failure" May 6 23:46:09.417: INFO: Trying to get logs from node jerma-worker pod pod-0366ea69-ed68-4bc4-bb31-d1d6aa26b2d4 container test-container: STEP: delete the pod May 6 23:46:09.466: INFO: Waiting for pod pod-0366ea69-ed68-4bc4-bb31-d1d6aa26b2d4 to disappear May 6 23:46:09.474: INFO: Pod pod-0366ea69-ed68-4bc4-bb31-d1d6aa26b2d4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:46:09.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8532" for this suite. 
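The emptyDir matrix tests all follow one pattern: mount an emptyDir volume (default medium here, i.e. node disk rather than tmpfs), run the pod as the stated user (non-root in this case), have the container create a file with the requested mode (0777), and verify permissions from the container's output. A hedged sketch; everything except the emptyDir/default-medium/non-root combination is illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example         # illustrative; the test's pod name is generated
spec:
  securityContext:
    runAsUser: 1000                  # non-root; the specific UID is illustrative
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                   # illustrative
    command: ["/bin/sh", "-c", "touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                     # no medium set = "default" (backed by node storage)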
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":167,"skipped":2661,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:46:09.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-3511 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-3511 STEP: Creating statefulset with conflicting port in namespace statefulset-3511 STEP: Waiting until pod test-pod will start running in namespace statefulset-3511 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-3511 May 6 23:46:14.087: INFO: Observed stateful pod in namespace: statefulset-3511, name: ss-0, uid: 3750162f-190a-4414-98ec-01c24cd66c97, status phase: Pending. Waiting for statefulset controller to delete. May 6 23:46:14.404: INFO: Observed stateful pod in namespace: statefulset-3511, name: ss-0, uid: 3750162f-190a-4414-98ec-01c24cd66c97, status phase: Failed. Waiting for statefulset controller to delete. May 6 23:46:14.434: INFO: Observed stateful pod in namespace: statefulset-3511, name: ss-0, uid: 3750162f-190a-4414-98ec-01c24cd66c97, status phase: Failed. Waiting for statefulset controller to delete. May 6 23:46:14.466: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-3511 STEP: Removing pod with conflicting port in namespace statefulset-3511 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-3511 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 6 23:46:20.533: INFO: Deleting all statefulset in ns statefulset-3511 May 6 23:46:20.536: INFO: Scaling statefulset ss to 0 May 6 23:46:30.571: INFO: Waiting for statefulset status.replicas updated to 0 May 6 23:46:30.574: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:46:30.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3511" for this suite. 
• [SLOW TEST:21.439 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":168,"skipped":2732,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:46:30.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 6 23:46:30.994: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:46:31.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-106" for this suite. 
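Getting, updating, and patching the status sub-resource only works if the CRD opts in via subresources.status, after which /status is served as a separate endpoint whose writes cannot touch the main object's spec. A sketch of such a CRD, assuming the apiextensions.k8s.io/v1 shape; the group, names, and schema are illustrative:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: noxus.mygroup.example.com    # illustrative
spec:
  group: mygroup.example.com
  scope: Cluster
  names:
    plural: noxus
    singular: noxu
    kind: Noxu
    listKind: NoxuList
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
    subresources:
      status: {}                     # enables GET/PUT/PATCH on the .../status endpoint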
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":169,"skipped":2736,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:46:31.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-f2bf3dfa-e38c-4b0e-952f-bbf37ee8076c STEP: Creating a pod to test consume configMaps May 6 23:46:31.705: INFO: Waiting up to 5m0s for pod "pod-configmaps-817a72f8-ec6d-4d18-a191-0bbc83936f54" in namespace "configmap-5949" to be "success or failure" May 6 23:46:31.731: INFO: Pod "pod-configmaps-817a72f8-ec6d-4d18-a191-0bbc83936f54": Phase="Pending", Reason="", readiness=false. Elapsed: 25.998796ms May 6 23:46:33.735: INFO: Pod "pod-configmaps-817a72f8-ec6d-4d18-a191-0bbc83936f54": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029873649s May 6 23:46:35.739: INFO: Pod "pod-configmaps-817a72f8-ec6d-4d18-a191-0bbc83936f54": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033786013s May 6 23:46:37.744: INFO: Pod "pod-configmaps-817a72f8-ec6d-4d18-a191-0bbc83936f54": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.038096901s STEP: Saw pod success May 6 23:46:37.744: INFO: Pod "pod-configmaps-817a72f8-ec6d-4d18-a191-0bbc83936f54" satisfied condition "success or failure" May 6 23:46:37.746: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-817a72f8-ec6d-4d18-a191-0bbc83936f54 container configmap-volume-test: STEP: delete the pod May 6 23:46:37.781: INFO: Waiting for pod pod-configmaps-817a72f8-ec6d-4d18-a191-0bbc83936f54 to disappear May 6 23:46:37.785: INFO: Pod pod-configmaps-817a72f8-ec6d-4d18-a191-0bbc83936f54 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:46:37.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5949" for this suite. 
• [SLOW TEST:6.177 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":170,"skipped":2749,"failed":0} S ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:46:37.791: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name secret-emptykey-test-be0db8f9-c282-49bd-9048-e6d880f6fbdf [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:46:37.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5980" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":171,"skipped":2750,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:46:37.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 6 23:46:38.697: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 6 23:46:40.720: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724405598, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724405598, 
loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724405598, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724405598, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 23:46:42.724: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724405598, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724405598, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724405598, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724405598, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 23:46:45.882: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 6 23:46:45.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:46:48.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-3053" for this suite. 
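Conversion between v1 and v2 of the custom resource is configured on the CRD itself: with spec.conversion.strategy set to Webhook, the API server calls the deployed converter whenever stored objects are read or listed at a different version, which is exactly what the mixed-version list in this test exercises. A sketch, assuming the apiextensions.k8s.io/v1 shape; the service name and namespace come from the log, while the group, kind, and path are illustrative:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: multiversions.stable.example.com   # illustrative
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: multiversions
    kind: MultiVersion
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
  - name: v2
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
  conversion:
    strategy: Webhook
    webhook:
      conversionReviewVersions: ["v1", "v1beta1"]
      clientConfig:
        service:
          namespace: crd-webhook-3053            # from the log
          name: e2e-test-crd-conversion-webhook  # from the log
          path: /crdconvert                      # illustrative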
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:10.756 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":172,"skipped":2768,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:46:48.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating all guestbook components May 6 23:46:49.428: INFO:

apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

May 6 23:46:49.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4001' May 6 23:46:54.906: INFO: stderr: "" May 6 23:46:54.906: INFO: stdout: "service/agnhost-slave created\n" May 6 23:46:54.906: INFO:

apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

May 6 23:46:54.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4001' May 6 23:46:55.251: INFO: stderr: "" May 6 23:46:55.251: INFO: stdout: "service/agnhost-master created\n" May 6 23:46:55.251: INFO:

apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

May 6 23:46:55.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4001' May 6 23:46:55.557: INFO: stderr: "" May 6 23:46:55.557: INFO: stdout: "service/frontend created\n" May 6 23:46:55.557: INFO:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

May 6 23:46:55.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4001' May 6 23:46:55.862: INFO: stderr: "" May 6 23:46:55.862: INFO: stdout: "deployment.apps/frontend created\n" May 6 23:46:55.862: INFO:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

May 6 23:46:55.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4001' May 6 23:46:56.213: INFO: stderr: "" May 6 23:46:56.213: INFO: stdout: "deployment.apps/agnhost-master created\n" May 6 23:46:56.214: INFO:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

May 6 23:46:56.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4001' May 6 23:46:56.489: INFO: stderr: "" May 6 23:46:56.489: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app May 6 23:46:56.489: INFO: Waiting for all frontend pods to be Running. May 6 23:47:06.540: INFO: Waiting for frontend to serve content. May 6 23:47:06.552: INFO: Trying to add a new entry to the guestbook. May 6 23:47:06.563: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources May 6 23:47:06.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4001' May 6 23:47:06.742: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 6 23:47:06.743: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources May 6 23:47:06.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4001' May 6 23:47:06.930: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 6 23:47:06.930: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 6 23:47:06.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4001' May 6 23:47:07.057: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 6 23:47:07.057: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 6 23:47:07.058: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4001' May 6 23:47:07.164: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 6 23:47:07.164: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources May 6 23:47:07.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4001' May 6 23:47:07.261: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 6 23:47:07.261: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 6 23:47:07.261: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4001' May 6 23:47:07.356: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 6 23:47:07.356: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:47:07.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4001" for this suite. 
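The Guestbook transcript above boils down to a create / validate / force-delete cycle driven entirely over stdin. A minimal sketch of the same pattern, assuming kubectl on PATH and a kubeconfig pointing at a disposable test cluster; the namespace guestbook-demo and the trimmed-down manifest are illustrative, not taken from the test:

#!/usr/bin/env bash
# Minimal sketch of the Guestbook test's create/validate/force-delete cycle.
# Assumes: kubectl on PATH, a kubeconfig pointing at a disposable cluster.
set -euo pipefail

NS=guestbook-demo              # illustrative; the test used kubectl-4001
kubectl create namespace "$NS"

# Manifests are piped in on stdin, exactly like the test's `create -f -`.
kubectl --namespace="$NS" create -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
EOF

# --grace-period=0 --force returns before the object is actually gone, which
# is why kubectl prints the "Immediate deletion does not wait ..." warning
# seen in the log above.
kubectl --namespace="$NS" delete service frontend --grace-period=0 --force
kubectl delete namespace "$NS"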
• [SLOW TEST:18.698 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:380 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":278,"completed":173,"skipped":2828,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:47:07.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-3016 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet May 6 23:47:07.447: INFO: Found 0 stateful pods, waiting for 3 May 6 23:47:17.452: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 6 23:47:17.452: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 6 23:47:17.452: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 6 23:47:27.451: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 6 23:47:27.451: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 6 23:47:27.451: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 6 23:47:27.477: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 6 23:47:37.538: INFO: Updating stateful set ss2 May 6 23:47:37.551: INFO: Waiting for Pod statefulset-3016/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 6 23:47:47.558: INFO: Waiting for Pod statefulset-3016/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted May 6 23:47:57.979: INFO: Found 2 stateful pods, waiting for 3 May 6 23:48:07.985: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 6 23:48:07.985: INFO: Waiting for pod 
ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 6 23:48:07.985: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 6 23:48:08.010: INFO: Updating stateful set ss2 May 6 23:48:08.075: INFO: Waiting for Pod statefulset-3016/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 6 23:48:18.082: INFO: Waiting for Pod statefulset-3016/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 6 23:48:28.099: INFO: Updating stateful set ss2 May 6 23:48:28.159: INFO: Waiting for StatefulSet statefulset-3016/ss2 to complete update May 6 23:48:28.160: INFO: Waiting for Pod statefulset-3016/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 6 23:48:38.168: INFO: Waiting for StatefulSet statefulset-3016/ss2 to complete update May 6 23:48:38.168: INFO: Waiting for Pod statefulset-3016/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 6 23:48:48.166: INFO: Deleting all statefulset in ns statefulset-3016 May 6 23:48:48.169: INFO: Scaling statefulset ss2 to 0 May 6 23:49:18.193: INFO: Waiting for statefulset status.replicas updated to 0 May 6 23:49:18.196: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:49:18.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3016" for this suite. • [SLOW TEST:130.853 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":174,"skipped":2837,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:49:18.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-2062dbab-8593-4b34-bb11-f7d7d89f435b STEP: Creating a pod to test consume secrets May 6 23:49:18.280: INFO: Waiting up to 5m0s for 
pod "pod-projected-secrets-5f14ef50-fba8-4a4c-be34-714842db3787" in namespace "projected-7749" to be "success or failure" May 6 23:49:18.309: INFO: Pod "pod-projected-secrets-5f14ef50-fba8-4a4c-be34-714842db3787": Phase="Pending", Reason="", readiness=false. Elapsed: 29.648482ms May 6 23:49:20.344: INFO: Pod "pod-projected-secrets-5f14ef50-fba8-4a4c-be34-714842db3787": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064001331s May 6 23:49:22.436: INFO: Pod "pod-projected-secrets-5f14ef50-fba8-4a4c-be34-714842db3787": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.156022138s STEP: Saw pod success May 6 23:49:22.436: INFO: Pod "pod-projected-secrets-5f14ef50-fba8-4a4c-be34-714842db3787" satisfied condition "success or failure" May 6 23:49:22.451: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-5f14ef50-fba8-4a4c-be34-714842db3787 container projected-secret-volume-test: STEP: delete the pod May 6 23:49:22.498: INFO: Waiting for pod pod-projected-secrets-5f14ef50-fba8-4a4c-be34-714842db3787 to disappear May 6 23:49:22.511: INFO: Pod pod-projected-secrets-5f14ef50-fba8-4a4c-be34-714842db3787 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:49:22.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7749" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":175,"skipped":2844,"failed":0} SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:49:22.519: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-f060488f-2ba5-42da-a8a4-bc2f424e3c05 STEP: Creating a pod to test consume secrets May 6 23:49:22.645: INFO: Waiting up to 5m0s for pod "pod-secrets-7899a4fe-8f14-437d-be1d-a2f1f5ad1355" in namespace "secrets-8503" to be "success or failure" May 6 23:49:22.649: INFO: Pod "pod-secrets-7899a4fe-8f14-437d-be1d-a2f1f5ad1355": Phase="Pending", Reason="", readiness=false. Elapsed: 3.670131ms May 6 23:49:24.732: INFO: Pod "pod-secrets-7899a4fe-8f14-437d-be1d-a2f1f5ad1355": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086563714s May 6 23:49:26.736: INFO: Pod "pod-secrets-7899a4fe-8f14-437d-be1d-a2f1f5ad1355": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.090655998s STEP: Saw pod success May 6 23:49:26.736: INFO: Pod "pod-secrets-7899a4fe-8f14-437d-be1d-a2f1f5ad1355" satisfied condition "success or failure" May 6 23:49:26.739: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-7899a4fe-8f14-437d-be1d-a2f1f5ad1355 container secret-volume-test: STEP: delete the pod May 6 23:49:26.758: INFO: Waiting for pod pod-secrets-7899a4fe-8f14-437d-be1d-a2f1f5ad1355 to disappear May 6 23:49:26.763: INFO: Pod pod-secrets-7899a4fe-8f14-437d-be1d-a2f1f5ad1355 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:49:26.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8503" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":176,"skipped":2847,"failed":0} SS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:49:26.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2274 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2274;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2274 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2274;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2274.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2274.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2274.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2274.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2274.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2274.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2274.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2274.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2274.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2274.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2274.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2274.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2274.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 206.12.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.12.206_udp@PTR;check="$$(dig +tcp +noall +answer +search 206.12.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.12.206_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2274 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2274;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2274 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2274;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2274.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2274.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2274.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2274.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2274.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2274.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2274.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2274.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2274.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2274.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2274.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2274.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2274.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 206.12.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.12.206_udp@PTR;check="$$(dig +tcp +noall +answer +search 206.12.111.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.111.12.206_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 6 23:49:33.137: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:33.140: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:33.142: INFO: Unable to read wheezy_udp@dns-test-service.dns-2274 from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:33.145: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2274 from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:33.148: INFO: Unable to read wheezy_udp@dns-test-service.dns-2274.svc from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:33.150: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2274.svc from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:33.153: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2274.svc from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:33.156: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2274.svc from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:33.173: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:33.175: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:33.177: INFO: Unable to read jessie_udp@dns-test-service.dns-2274 from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:33.179: INFO: Unable to read jessie_tcp@dns-test-service.dns-2274 from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:33.182: INFO: Unable to read jessie_udp@dns-test-service.dns-2274.svc from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:33.184: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-2274.svc from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:33.186: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2274.svc from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:33.188: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2274.svc from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:33.221: INFO: Lookups using dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2274 wheezy_tcp@dns-test-service.dns-2274 wheezy_udp@dns-test-service.dns-2274.svc wheezy_tcp@dns-test-service.dns-2274.svc wheezy_udp@_http._tcp.dns-test-service.dns-2274.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2274.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2274 jessie_tcp@dns-test-service.dns-2274 jessie_udp@dns-test-service.dns-2274.svc jessie_tcp@dns-test-service.dns-2274.svc jessie_udp@_http._tcp.dns-test-service.dns-2274.svc jessie_tcp@_http._tcp.dns-test-service.dns-2274.svc] May 6 23:49:38.226: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:38.230: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:38.234: INFO: Unable to read wheezy_udp@dns-test-service.dns-2274 from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:38.237: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2274 from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:38.240: INFO: Unable to read wheezy_udp@dns-test-service.dns-2274.svc from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:38.244: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2274.svc from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:38.247: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2274.svc from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:38.250: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2274.svc from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:38.273: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:38.276: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:38.279: INFO: Unable to read jessie_udp@dns-test-service.dns-2274 from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:38.282: INFO: Unable to read jessie_tcp@dns-test-service.dns-2274 from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:38.285: INFO: Unable to read jessie_udp@dns-test-service.dns-2274.svc from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:38.288: INFO: Unable to read jessie_tcp@dns-test-service.dns-2274.svc from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:38.291: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2274.svc from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:38.294: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2274.svc from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:38.312: INFO: Lookups using dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2274 wheezy_tcp@dns-test-service.dns-2274 wheezy_udp@dns-test-service.dns-2274.svc wheezy_tcp@dns-test-service.dns-2274.svc wheezy_udp@_http._tcp.dns-test-service.dns-2274.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2274.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2274 jessie_tcp@dns-test-service.dns-2274 jessie_udp@dns-test-service.dns-2274.svc jessie_tcp@dns-test-service.dns-2274.svc jessie_udp@_http._tcp.dns-test-service.dns-2274.svc jessie_tcp@_http._tcp.dns-test-service.dns-2274.svc] May 6 23:49:43.226: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:43.230: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:43.233: INFO: Unable to read wheezy_udp@dns-test-service.dns-2274 from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:43.236: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2274 from pod 
dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:43.240: INFO: Unable to read wheezy_udp@dns-test-service.dns-2274.svc from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:43.242: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2274.svc from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:43.245: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2274.svc from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:43.247: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2274.svc from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:43.266: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:43.271: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:43.274: INFO: Unable to read jessie_udp@dns-test-service.dns-2274 from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:43.276: INFO: Unable to read jessie_tcp@dns-test-service.dns-2274 from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:43.278: INFO: Unable to read jessie_udp@dns-test-service.dns-2274.svc from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:43.280: INFO: Unable to read jessie_tcp@dns-test-service.dns-2274.svc from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:43.282: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2274.svc from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:43.284: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2274.svc from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:43.298: INFO: Lookups using dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2274 wheezy_tcp@dns-test-service.dns-2274 wheezy_udp@dns-test-service.dns-2274.svc wheezy_tcp@dns-test-service.dns-2274.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-2274.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2274.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2274 jessie_tcp@dns-test-service.dns-2274 jessie_udp@dns-test-service.dns-2274.svc jessie_tcp@dns-test-service.dns-2274.svc jessie_udp@_http._tcp.dns-test-service.dns-2274.svc jessie_tcp@_http._tcp.dns-test-service.dns-2274.svc] May 6 23:49:48.227: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:48.231: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:48.235: INFO: Unable to read wheezy_udp@dns-test-service.dns-2274 from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:48.238: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2274 from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:48.241: INFO: Unable to read wheezy_udp@dns-test-service.dns-2274.svc from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:48.244: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2274.svc from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:48.247: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2274.svc from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:48.250: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2274.svc from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:48.270: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:48.272: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:48.274: INFO: Unable to read jessie_udp@dns-test-service.dns-2274 from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:48.277: INFO: Unable to read jessie_tcp@dns-test-service.dns-2274 from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:48.279: INFO: Unable to read jessie_udp@dns-test-service.dns-2274.svc from pod 
dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:48.281: INFO: Unable to read jessie_tcp@dns-test-service.dns-2274.svc from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:48.284: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2274.svc from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:48.286: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2274.svc from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:48.301: INFO: Lookups using dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2274 wheezy_tcp@dns-test-service.dns-2274 wheezy_udp@dns-test-service.dns-2274.svc wheezy_tcp@dns-test-service.dns-2274.svc wheezy_udp@_http._tcp.dns-test-service.dns-2274.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2274.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2274 jessie_tcp@dns-test-service.dns-2274 jessie_udp@dns-test-service.dns-2274.svc jessie_tcp@dns-test-service.dns-2274.svc jessie_udp@_http._tcp.dns-test-service.dns-2274.svc jessie_tcp@_http._tcp.dns-test-service.dns-2274.svc] May 6 23:49:53.225: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:53.229: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:53.232: INFO: Unable to read wheezy_udp@dns-test-service.dns-2274 from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:53.234: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2274 from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:53.237: INFO: Unable to read wheezy_udp@dns-test-service.dns-2274.svc from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:53.240: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2274.svc from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:53.243: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2274.svc from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:53.246: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2274.svc from pod 
dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:53.271: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:53.274: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:53.276: INFO: Unable to read jessie_udp@dns-test-service.dns-2274 from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:53.279: INFO: Unable to read jessie_tcp@dns-test-service.dns-2274 from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:53.281: INFO: Unable to read jessie_udp@dns-test-service.dns-2274.svc from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:53.283: INFO: Unable to read jessie_tcp@dns-test-service.dns-2274.svc from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:53.286: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2274.svc from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:53.288: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2274.svc from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:53.308: INFO: Lookups using dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2274 wheezy_tcp@dns-test-service.dns-2274 wheezy_udp@dns-test-service.dns-2274.svc wheezy_tcp@dns-test-service.dns-2274.svc wheezy_udp@_http._tcp.dns-test-service.dns-2274.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2274.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2274 jessie_tcp@dns-test-service.dns-2274 jessie_udp@dns-test-service.dns-2274.svc jessie_tcp@dns-test-service.dns-2274.svc jessie_udp@_http._tcp.dns-test-service.dns-2274.svc jessie_tcp@_http._tcp.dns-test-service.dns-2274.svc] May 6 23:49:58.239: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:58.256: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:58.259: INFO: Unable to read wheezy_udp@dns-test-service.dns-2274 from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could 
not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:58.261: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2274 from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:58.264: INFO: Unable to read wheezy_udp@dns-test-service.dns-2274.svc from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:58.267: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2274.svc from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:58.269: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2274.svc from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:58.272: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2274.svc from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:58.302: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:58.304: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:58.306: INFO: Unable to read jessie_udp@dns-test-service.dns-2274 from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:58.308: INFO: Unable to read jessie_tcp@dns-test-service.dns-2274 from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:58.310: INFO: Unable to read jessie_udp@dns-test-service.dns-2274.svc from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:58.312: INFO: Unable to read jessie_tcp@dns-test-service.dns-2274.svc from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:58.314: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2274.svc from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:58.316: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2274.svc from pod dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77: the server could not find the requested resource (get pods dns-test-50748b53-8134-4df2-94fc-f8856471af77) May 6 23:49:58.334: INFO: Lookups using dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service 
wheezy_udp@dns-test-service.dns-2274 wheezy_tcp@dns-test-service.dns-2274 wheezy_udp@dns-test-service.dns-2274.svc wheezy_tcp@dns-test-service.dns-2274.svc wheezy_udp@_http._tcp.dns-test-service.dns-2274.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2274.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2274 jessie_tcp@dns-test-service.dns-2274 jessie_udp@dns-test-service.dns-2274.svc jessie_tcp@dns-test-service.dns-2274.svc jessie_udp@_http._tcp.dns-test-service.dns-2274.svc jessie_tcp@_http._tcp.dns-test-service.dns-2274.svc] May 6 23:50:03.300: INFO: DNS probes using dns-2274/dns-test-50748b53-8134-4df2-94fc-f8856471af77 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:50:04.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2274" for this suite. • [SLOW TEST:37.799 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":177,"skipped":2849,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:50:04.569: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 6 23:50:04.628: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b71601b7-e734-406a-b01c-3b49c3f0fdc7" in namespace "projected-1982" to be "success or failure" May 6 23:50:04.657: INFO: Pod "downwardapi-volume-b71601b7-e734-406a-b01c-3b49c3f0fdc7": Phase="Pending", Reason="", readiness=false. Elapsed: 28.501996ms May 6 23:50:06.747: INFO: Pod "downwardapi-volume-b71601b7-e734-406a-b01c-3b49c3f0fdc7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118905751s May 6 23:50:08.752: INFO: Pod "downwardapi-volume-b71601b7-e734-406a-b01c-3b49c3f0fdc7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.123013813s May 6 23:50:10.756: INFO: Pod "downwardapi-volume-b71601b7-e734-406a-b01c-3b49c3f0fdc7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.127398039s STEP: Saw pod success May 6 23:50:10.756: INFO: Pod "downwardapi-volume-b71601b7-e734-406a-b01c-3b49c3f0fdc7" satisfied condition "success or failure" May 6 23:50:10.759: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-b71601b7-e734-406a-b01c-3b49c3f0fdc7 container client-container: STEP: delete the pod May 6 23:50:10.795: INFO: Waiting for pod downwardapi-volume-b71601b7-e734-406a-b01c-3b49c3f0fdc7 to disappear May 6 23:50:10.800: INFO: Pod downwardapi-volume-b71601b7-e734-406a-b01c-3b49c3f0fdc7 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:50:10.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1982" for this suite. • [SLOW TEST:6.238 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":178,"skipped":2857,"failed":0} SSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:50:10.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name projected-secret-test-1590d172-40ef-4248-a2e4-0ca225087e04 STEP: Creating a pod to test consume secrets May 6 23:50:10.934: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-af5de837-5047-4952-b043-7c6eb04f9dd9" in namespace "projected-7935" to be "success or failure" May 6 23:50:10.938: INFO: Pod "pod-projected-secrets-af5de837-5047-4952-b043-7c6eb04f9dd9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.980984ms May 6 23:50:12.956: INFO: Pod "pod-projected-secrets-af5de837-5047-4952-b043-7c6eb04f9dd9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022454893s May 6 23:50:14.959: INFO: Pod "pod-projected-secrets-af5de837-5047-4952-b043-7c6eb04f9dd9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.025327683s STEP: Saw pod success May 6 23:50:14.959: INFO: Pod "pod-projected-secrets-af5de837-5047-4952-b043-7c6eb04f9dd9" satisfied condition "success or failure" May 6 23:50:14.962: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-af5de837-5047-4952-b043-7c6eb04f9dd9 container secret-volume-test: STEP: delete the pod May 6 23:50:14.999: INFO: Waiting for pod pod-projected-secrets-af5de837-5047-4952-b043-7c6eb04f9dd9 to disappear May 6 23:50:15.016: INFO: Pod pod-projected-secrets-af5de837-5047-4952-b043-7c6eb04f9dd9 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:50:15.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7935" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":179,"skipped":2860,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:50:15.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 6 23:50:15.187: INFO: Waiting up to 5m0s for pod "downward-api-8e5f4a0f-ff2d-4458-a971-e9393b4c0b7f" in namespace "downward-api-7390" to be "success or failure" May 6 23:50:15.195: INFO: Pod "downward-api-8e5f4a0f-ff2d-4458-a971-e9393b4c0b7f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.363015ms May 6 23:50:17.200: INFO: Pod "downward-api-8e5f4a0f-ff2d-4458-a971-e9393b4c0b7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012562837s May 6 23:50:19.203: INFO: Pod "downward-api-8e5f4a0f-ff2d-4458-a971-e9393b4c0b7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016432929s STEP: Saw pod success May 6 23:50:19.204: INFO: Pod "downward-api-8e5f4a0f-ff2d-4458-a971-e9393b4c0b7f" satisfied condition "success or failure" May 6 23:50:19.206: INFO: Trying to get logs from node jerma-worker pod downward-api-8e5f4a0f-ff2d-4458-a971-e9393b4c0b7f container dapi-container: STEP: delete the pod May 6 23:50:19.300: INFO: Waiting for pod downward-api-8e5f4a0f-ff2d-4458-a971-e9393b4c0b7f to disappear May 6 23:50:19.396: INFO: Pod downward-api-8e5f4a0f-ff2d-4458-a971-e9393b4c0b7f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:50:19.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7390" for this suite. 
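For reference, the downward-API mechanism that test exercises is an env var populated from a fieldRef on the pod's own metadata. A minimal sketch, assuming a reachable cluster; the pod name, busybox image, and POD_UID variable are illustrative rather than the test's own fixtures:

# Sketch: expose the pod's own UID to its container via the downward API.
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-uid-demo        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.31          # any shell-capable image works
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid   # UID assigned by the API server
EOF
# Once the pod has completed, the UID shows up in its log:
kubectl logs downward-uid-demo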
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":180,"skipped":2877,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:50:19.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 6 23:50:19.497: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:50:20.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2658" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":278,"completed":181,"skipped":2878,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:50:20.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 6 23:50:20.905: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 6 23:50:23.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7487 create -f -' May 6 23:50:26.938: INFO: stderr: "" May 6 23:50:26.938: INFO: stdout: "e2e-test-crd-publish-openapi-4386-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 6 23:50:26.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7487 delete e2e-test-crd-publish-openapi-4386-crds test-cr' May 6 23:50:27.058: INFO: stderr: "" May 6 23:50:27.058: INFO: stdout: "e2e-test-crd-publish-openapi-4386-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" May 6 23:50:27.059: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7487 apply -f -' May 6 23:50:27.361: INFO: stderr: "" May 6 23:50:27.361: INFO: stdout: "e2e-test-crd-publish-openapi-4386-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 6 23:50:27.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7487 delete e2e-test-crd-publish-openapi-4386-crds test-cr' May 6 23:50:27.475: INFO: stderr: "" May 6 23:50:27.475: INFO: stdout: "e2e-test-crd-publish-openapi-4386-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 6 23:50:27.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4386-crds' May 6 23:50:27.757: INFO: stderr: "" May 6 23:50:27.757: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4386-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<map[string]>\n Specification of Waldo\n\n status\t<map[string]>\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:50:30.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7487" for this suite.
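The CRD published above keeps unknown fields in a nested (embedded) object, which is why kubectl create/apply accept a test-cr with arbitrary properties and kubectl explain still renders the top-level fields. A hedged sketch of such a schema using the k8s.io/apiextensions-apiserver v1 Go types (the exact field layout is an assumption modeled on the explain output, not the test's actual manifest):

package main

import (
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

func main() {
	preserve := true
	// "spec" and "status" are objects that preserve unknown fields, so any
	// nested properties a client sends survive validation and storage.
	schema := apiextensionsv1.JSONSchemaProps{
		Type: "object",
		Properties: map[string]apiextensionsv1.JSONSchemaProps{
			"spec": {
				Type:                   "object",
				Description:            "Specification of Waldo",
				XPreserveUnknownFields: &preserve,
			},
			"status": {
				Type:                   "object",
				Description:            "Status of Waldo",
				XPreserveUnknownFields: &preserve,
			},
		},
	}
	fmt.Println(schema.Properties["spec"].Description)
}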
• [SLOW TEST:9.829 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":182,"skipped":2907,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:50:30.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 6 23:50:36.028: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:50:36.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6515" for this suite. 
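The check above hinges on two container fields: terminationMessagePath (the file whose contents become the termination message) and terminationMessagePolicy. With FallbackToLogsOnError, the kubelet reads the file when it is non-empty and only falls back to the tail of the container log when the file is empty and the container failed, which is why the expected message here is the file's "OK". A sketch of such a container (image and command are illustrative, not taken from the test):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// The container writes "OK" to the default termination-message path;
	// since the file is non-empty, FallbackToLogsOnError never consults
	// the container log.
	c := corev1.Container{
		Name:                     "termination-message-container",
		Image:                    "busybox",
		Command:                  []string{"sh", "-c", "echo -n OK > /dev/termination-log"},
		TerminationMessagePath:   "/dev/termination-log",
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
	}
	fmt.Println(c.TerminationMessagePolicy)
}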
• [SLOW TEST:5.451 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":183,"skipped":2971,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:50:36.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 6 23:50:36.181: INFO: Waiting up to 5m0s for pod "downwardapi-volume-832e16b2-973c-49bc-a047-725699436b3d" in namespace "projected-9068" to be "success or failure" May 6 23:50:36.184: INFO: Pod "downwardapi-volume-832e16b2-973c-49bc-a047-725699436b3d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.699857ms May 6 23:50:38.293: INFO: Pod "downwardapi-volume-832e16b2-973c-49bc-a047-725699436b3d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112020516s May 6 23:50:40.309: INFO: Pod "downwardapi-volume-832e16b2-973c-49bc-a047-725699436b3d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.128148212s STEP: Saw pod success May 6 23:50:40.309: INFO: Pod "downwardapi-volume-832e16b2-973c-49bc-a047-725699436b3d" satisfied condition "success or failure" May 6 23:50:40.311: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-832e16b2-973c-49bc-a047-725699436b3d container client-container: STEP: delete the pod May 6 23:50:40.342: INFO: Waiting for pod downwardapi-volume-832e16b2-973c-49bc-a047-725699436b3d to disappear May 6 23:50:40.676: INFO: Pod downwardapi-volume-832e16b2-973c-49bc-a047-725699436b3d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:50:40.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9068" for this suite. 
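This test mounts a projected downward API volume publishing limits.cpu for a container that sets no CPU limit, so the kubelet substitutes the node's allocatable CPU as the default. A sketch of the volume source involved (file path and container name are illustrative assumptions):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// A projected volume whose single file carries limits.cpu for the
	// named container; with no explicit limit, node allocatable is used.
	src := corev1.ProjectedVolumeSource{
		Sources: []corev1.VolumeProjection{{
			DownwardAPI: &corev1.DownwardAPIProjection{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "cpu_limit",
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						ContainerName: "client-container",
						Resource:      "limits.cpu",
					},
				}},
			},
		}},
	}
	fmt.Println(src.Sources[0].DownwardAPI.Items[0].Path)
}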
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":184,"skipped":2990,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:50:40.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-255a6d94-8bd1-4438-af02-540425913f81 in namespace container-probe-443 May 6 23:50:44.900: INFO: Started pod liveness-255a6d94-8bd1-4438-af02-540425913f81 in namespace container-probe-443 STEP: checking the pod's current state and verifying that restartCount is present May 6 23:50:44.902: INFO: Initial restart count of pod liveness-255a6d94-8bd1-4438-af02-540425913f81 is 0 May 6 23:50:59.467: INFO: Restart count of pod container-probe-443/liveness-255a6d94-8bd1-4438-af02-540425913f81 is now 1 (14.564581344s elapsed) May 6 23:51:19.538: INFO: Restart count of pod container-probe-443/liveness-255a6d94-8bd1-4438-af02-540425913f81 is now 2 (34.635472769s elapsed) May 6 23:51:39.595: INFO: Restart count of pod container-probe-443/liveness-255a6d94-8bd1-4438-af02-540425913f81 is now 3 (54.692899066s elapsed) May 6 23:51:59.931: INFO: Restart count of pod container-probe-443/liveness-255a6d94-8bd1-4438-af02-540425913f81 is now 4 (1m15.029089919s elapsed) May 6 23:53:10.247: INFO: Restart count of pod container-probe-443/liveness-255a6d94-8bd1-4438-af02-540425913f81 is now 5 (2m25.34448438s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:53:10.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-443" for this suite. 
• [SLOW TEST:150.273 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":185,"skipped":3002,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:53:11.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-a143b59d-ad74-4e70-afe9-c4cebda25e45 STEP: Creating a pod to test consume secrets May 6 23:53:11.866: INFO: Waiting up to 5m0s for pod "pod-secrets-315ee0a3-492f-4193-944e-83d3ea1b1dd4" in namespace "secrets-198" to be "success or failure" May 6 23:53:11.905: INFO: Pod "pod-secrets-315ee0a3-492f-4193-944e-83d3ea1b1dd4": Phase="Pending", Reason="", readiness=false. Elapsed: 38.606715ms May 6 23:53:14.193: INFO: Pod "pod-secrets-315ee0a3-492f-4193-944e-83d3ea1b1dd4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.327153152s May 6 23:53:16.431: INFO: Pod "pod-secrets-315ee0a3-492f-4193-944e-83d3ea1b1dd4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.565079854s May 6 23:53:18.435: INFO: Pod "pod-secrets-315ee0a3-492f-4193-944e-83d3ea1b1dd4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.56883812s STEP: Saw pod success May 6 23:53:18.435: INFO: Pod "pod-secrets-315ee0a3-492f-4193-944e-83d3ea1b1dd4" satisfied condition "success or failure" May 6 23:53:18.438: INFO: Trying to get logs from node jerma-worker pod pod-secrets-315ee0a3-492f-4193-944e-83d3ea1b1dd4 container secret-env-test: STEP: delete the pod May 6 23:53:18.612: INFO: Waiting for pod pod-secrets-315ee0a3-492f-4193-944e-83d3ea1b1dd4 to disappear May 6 23:53:18.631: INFO: Pod pod-secrets-315ee0a3-492f-4193-944e-83d3ea1b1dd4 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:53:18.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-198" for this suite. 
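Here the secret's value reaches the container as an environment variable rather than a mounted volume. A minimal sketch of the env entry (secret name and key are illustrative, not the generated names from the run):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// SECRET_DATA is filled from key "data-1" of the named Secret before
	// the container starts; a missing key fails pod start unless the
	// reference is marked Optional.
	env := corev1.EnvVar{
		Name: "SECRET_DATA",
		ValueFrom: &corev1.EnvVarSource{
			SecretKeyRef: &corev1.SecretKeySelector{
				LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test"},
				Key:                  "data-1",
			},
		},
	}
	fmt.Println(env.Name)
}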
• [SLOW TEST:7.611 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":186,"skipped":3011,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:53:18.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 6 23:53:18.872: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:53:25.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-5631" for this suite. 
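With restartPolicy Never, a failing init container is terminal: the app containers are never started and the pod goes straight to phase Failed, which is what this test waits for. A hedged sketch of such a pod (names, image, and commands are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-fail"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{{
				Name:    "init1",
				Image:   "busybox",
				Command: []string{"sh", "-c", "exit 1"}, // always fails
			}},
			Containers: []corev1.Container{{
				Name:    "run1",
				Image:   "busybox",
				Command: []string{"sh", "-c", "sleep 600"}, // never started
			}},
		},
	}
	fmt.Println(pod.Spec.InitContainers[0].Name)
}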
• [SLOW TEST:6.712 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":187,"skipped":3037,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:53:25.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-7589 STEP: creating a selector STEP: Creating the service pods in kubernetes May 6 23:53:25.423: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 6 23:53:45.784: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.170:8080/dial?request=hostname&protocol=udp&host=10.244.1.169&port=8081&tries=1'] Namespace:pod-network-test-7589 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 23:53:45.785: INFO: >>> kubeConfig: /root/.kube/config I0506 23:53:45.818032 6 log.go:172] (0xc0027c9e40) (0xc002874b40) Create stream I0506 23:53:45.818063 6 log.go:172] (0xc0027c9e40) (0xc002874b40) Stream added, broadcasting: 1 I0506 23:53:45.819683 6 log.go:172] (0xc0027c9e40) Reply frame received for 1 I0506 23:53:45.819730 6 log.go:172] (0xc0027c9e40) (0xc0023d00a0) Create stream I0506 23:53:45.819744 6 log.go:172] (0xc0027c9e40) (0xc0023d00a0) Stream added, broadcasting: 3 I0506 23:53:45.820630 6 log.go:172] (0xc0027c9e40) Reply frame received for 3 I0506 23:53:45.820669 6 log.go:172] (0xc0027c9e40) (0xc0028a70e0) Create stream I0506 23:53:45.820679 6 log.go:172] (0xc0027c9e40) (0xc0028a70e0) Stream added, broadcasting: 5 I0506 23:53:45.821760 6 log.go:172] (0xc0027c9e40) Reply frame received for 5 I0506 23:53:45.878724 6 log.go:172] (0xc0027c9e40) Data frame received for 3 I0506 23:53:45.878759 6 log.go:172] (0xc0023d00a0) (3) Data frame handling I0506 23:53:45.878794 6 log.go:172] (0xc0023d00a0) (3) Data frame sent I0506 23:53:45.878896 6 log.go:172] (0xc0027c9e40) Data frame received for 3 I0506 23:53:45.878917 6 log.go:172] (0xc0023d00a0) (3) Data frame handling I0506 23:53:45.879078 6 log.go:172] (0xc0027c9e40) Data frame received for 5 I0506 23:53:45.879101 6 log.go:172] (0xc0028a70e0) (5) Data frame handling I0506 23:53:45.880357 6 log.go:172] (0xc0027c9e40) Data frame received for 1 I0506 23:53:45.880403 6 log.go:172] (0xc002874b40) (1)
Data frame handling I0506 23:53:45.880450 6 log.go:172] (0xc002874b40) (1) Data frame sent I0506 23:53:45.880482 6 log.go:172] (0xc0027c9e40) (0xc002874b40) Stream removed, broadcasting: 1 I0506 23:53:45.880506 6 log.go:172] (0xc0027c9e40) Go away received I0506 23:53:45.880620 6 log.go:172] (0xc0027c9e40) (0xc002874b40) Stream removed, broadcasting: 1 I0506 23:53:45.880651 6 log.go:172] (0xc0027c9e40) (0xc0023d00a0) Stream removed, broadcasting: 3 I0506 23:53:45.880678 6 log.go:172] (0xc0027c9e40) (0xc0028a70e0) Stream removed, broadcasting: 5 May 6 23:53:45.880: INFO: Waiting for responses: map[] May 6 23:53:45.889: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.170:8080/dial?request=hostname&protocol=udp&host=10.244.2.67&port=8081&tries=1'] Namespace:pod-network-test-7589 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 6 23:53:45.889: INFO: >>> kubeConfig: /root/.kube/config I0506 23:53:45.913578 6 log.go:172] (0xc001d12630) (0xc002874e60) Create stream I0506 23:53:45.913622 6 log.go:172] (0xc001d12630) (0xc002874e60) Stream added, broadcasting: 1 I0506 23:53:45.915317 6 log.go:172] (0xc001d12630) Reply frame received for 1 I0506 23:53:45.915369 6 log.go:172] (0xc001d12630) (0xc002874f00) Create stream I0506 23:53:45.915391 6 log.go:172] (0xc001d12630) (0xc002874f00) Stream added, broadcasting: 3 I0506 23:53:45.916402 6 log.go:172] (0xc001d12630) Reply frame received for 3 I0506 23:53:45.916437 6 log.go:172] (0xc001d12630) (0xc002874fa0) Create stream I0506 23:53:45.916458 6 log.go:172] (0xc001d12630) (0xc002874fa0) Stream added, broadcasting: 5 I0506 23:53:45.917577 6 log.go:172] (0xc001d12630) Reply frame received for 5 I0506 23:53:45.979520 6 log.go:172] (0xc001d12630) Data frame received for 3 I0506 23:53:45.979548 6 log.go:172] (0xc002874f00) (3) Data frame handling I0506 23:53:45.979565 6 log.go:172] (0xc002874f00) (3) Data frame sent I0506 23:53:45.980316 6 log.go:172] (0xc001d12630) Data frame received for 5 I0506 23:53:45.980351 6 log.go:172] (0xc002874fa0) (5) Data frame handling I0506 23:53:45.980374 6 log.go:172] (0xc001d12630) Data frame received for 3 I0506 23:53:45.980381 6 log.go:172] (0xc002874f00) (3) Data frame handling I0506 23:53:45.982428 6 log.go:172] (0xc001d12630) Data frame received for 1 I0506 23:53:45.982444 6 log.go:172] (0xc002874e60) (1) Data frame handling I0506 23:53:45.982452 6 log.go:172] (0xc002874e60) (1) Data frame sent I0506 23:53:45.982461 6 log.go:172] (0xc001d12630) (0xc002874e60) Stream removed, broadcasting: 1 I0506 23:53:45.982535 6 log.go:172] (0xc001d12630) (0xc002874e60) Stream removed, broadcasting: 1 I0506 23:53:45.982578 6 log.go:172] (0xc001d12630) (0xc002874f00) Stream removed, broadcasting: 3 I0506 23:53:45.982615 6 log.go:172] (0xc001d12630) (0xc002874fa0) Stream removed, broadcasting: 5 I0506 23:53:45.982691 6 log.go:172] (0xc001d12630) Go away received May 6 23:53:45.982: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:53:45.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7589" for this suite.
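The ExecWithOptions blocks above run curl inside host-test-container-pod against agnhost's /dial endpoint, which relays a UDP "hostname" probe to the target pod and echoes back which backend answered; "Waiting for responses: map[]" means no expected hostname is still outstanding. A sketch of the same request as a standalone Go program (the pod IPs are the ones from this run and are only reachable from inside that cluster's pod network):

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Ask the agnhost test container to dial the target pod over UDP and
	// report the hostname that answered.
	url := "http://10.244.1.170:8080/dial?request=hostname&protocol=udp&host=10.244.1.169&port=8081&tries=1"
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // on success, a JSON list of responding hostnames
}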
• [SLOW TEST:20.648 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":188,"skipped":3051,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:53:46.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:53:50.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-1145" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":189,"skipped":3084,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:53:50.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 6 23:53:50.882: INFO: Waiting up to 5m0s for pod "downwardapi-volume-71214469-a48e-488c-aaa9-dbe50149b162" in namespace "downward-api-559" to be "success or failure" May 6 23:53:51.107: INFO: Pod "downwardapi-volume-71214469-a48e-488c-aaa9-dbe50149b162": Phase="Pending", Reason="", readiness=false. Elapsed: 224.398954ms May 6 23:53:53.259: INFO: Pod "downwardapi-volume-71214469-a48e-488c-aaa9-dbe50149b162": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.376132541s May 6 23:53:55.407: INFO: Pod "downwardapi-volume-71214469-a48e-488c-aaa9-dbe50149b162": Phase="Pending", Reason="", readiness=false. Elapsed: 4.524751625s May 6 23:53:57.692: INFO: Pod "downwardapi-volume-71214469-a48e-488c-aaa9-dbe50149b162": Phase="Running", Reason="", readiness=true. Elapsed: 6.80932939s May 6 23:53:59.696: INFO: Pod "downwardapi-volume-71214469-a48e-488c-aaa9-dbe50149b162": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.813638767s STEP: Saw pod success May 6 23:53:59.696: INFO: Pod "downwardapi-volume-71214469-a48e-488c-aaa9-dbe50149b162" satisfied condition "success or failure" May 6 23:53:59.699: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-71214469-a48e-488c-aaa9-dbe50149b162 container client-container: STEP: delete the pod May 6 23:54:00.053: INFO: Waiting for pod downwardapi-volume-71214469-a48e-488c-aaa9-dbe50149b162 to disappear May 6 23:54:00.080: INFO: Pod downwardapi-volume-71214469-a48e-488c-aaa9-dbe50149b162 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:54:00.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-559" for this suite. • [SLOW TEST:9.765 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":190,"skipped":3087,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:54:00.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1357 STEP: creating a pod May 6 23:54:00.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-6483 -- logs-generator --log-lines-total 100 --run-duration 20s' May 6 23:54:00.531: INFO: stderr: "" May 6 23:54:00.531: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Waiting for log generator to start.
May 6 23:54:00.531: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] May 6 23:54:00.531: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-6483" to be "running and ready, or succeeded" May 6 23:54:00.542: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 10.542133ms May 6 23:54:02.727: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.195373541s May 6 23:54:04.731: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.199430409s May 6 23:54:06.738: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 6.20710553s May 6 23:54:06.739: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" May 6 23:54:06.739: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] STEP: checking for matching strings May 6 23:54:06.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6483' May 6 23:54:06.839: INFO: stderr: "" May 6 23:54:06.839: INFO: stdout: "I0506 23:54:03.884175 1 logs_generator.go:76] 0 GET /api/v1/namespaces/kube-system/pods/mb6 571\nI0506 23:54:04.084340 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/g6m 385\nI0506 23:54:04.284356 1 logs_generator.go:76] 2 POST /api/v1/namespaces/kube-system/pods/ktn 288\nI0506 23:54:04.484387 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/kube-system/pods/2nh 467\nI0506 23:54:04.684356 1 logs_generator.go:76] 4 GET /api/v1/namespaces/kube-system/pods/c9p 302\nI0506 23:54:04.884337 1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/47jt 279\nI0506 23:54:05.084376 1 logs_generator.go:76] 6 POST /api/v1/namespaces/default/pods/mdjr 505\nI0506 23:54:05.284332 1 logs_generator.go:76] 7 POST /api/v1/namespaces/kube-system/pods/whw 595\nI0506 23:54:05.484322 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/jb9 459\nI0506 23:54:05.684347 1 logs_generator.go:76] 9 GET /api/v1/namespaces/ns/pods/n8r 365\nI0506 23:54:05.884372 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/tf4 214\nI0506 23:54:06.084373 1 logs_generator.go:76] 11 POST /api/v1/namespaces/ns/pods/hvkf 579\nI0506 23:54:06.284396 1 logs_generator.go:76] 12 POST /api/v1/namespaces/ns/pods/hmz5 354\nI0506 23:54:06.484355 1 logs_generator.go:76] 13 POST /api/v1/namespaces/kube-system/pods/lszx 512\nI0506 23:54:06.684399 1 logs_generator.go:76] 14 GET /api/v1/namespaces/kube-system/pods/k4zg 402\n" STEP: limiting log lines May 6 23:54:06.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6483 --tail=1' May 6 23:54:06.939: INFO: stderr: "" May 6 23:54:06.939: INFO: stdout: "I0506 23:54:06.884318 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/default/pods/8sfj 209\n" May 6 23:54:06.939: INFO: got output "I0506 23:54:06.884318 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/default/pods/8sfj 209\n" STEP: limiting log bytes May 6 23:54:06.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6483 --limit-bytes=1' May 6 23:54:07.047: INFO: stderr: "" May 6 23:54:07.047: INFO: stdout: "I" May 6 23:54:07.047: INFO: got output "I" STEP: exposing timestamps May 6 23:54:07.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator
--namespace=kubectl-6483 --tail=1 --timestamps' May 6 23:54:07.144: INFO: stderr: "" May 6 23:54:07.144: INFO: stdout: "2020-05-06T23:54:07.084564766Z I0506 23:54:07.084376 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/g7qr 512\n" May 6 23:54:07.144: INFO: got output "2020-05-06T23:54:07.084564766Z I0506 23:54:07.084376 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/g7qr 512\n" STEP: restricting to a time range May 6 23:54:09.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6483 --since=1s' May 6 23:54:09.741: INFO: stderr: "" May 6 23:54:09.741: INFO: stdout: "I0506 23:54:08.884340 1 logs_generator.go:76] 25 GET /api/v1/namespaces/kube-system/pods/dkc 298\nI0506 23:54:09.084354 1 logs_generator.go:76] 26 GET /api/v1/namespaces/default/pods/p4xg 492\nI0506 23:54:09.284379 1 logs_generator.go:76] 27 GET /api/v1/namespaces/ns/pods/vdw 403\nI0506 23:54:09.484339 1 logs_generator.go:76] 28 POST /api/v1/namespaces/ns/pods/tdqf 351\nI0506 23:54:09.684344 1 logs_generator.go:76] 29 PUT /api/v1/namespaces/default/pods/898 391\n" May 6 23:54:09.741: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-6483 --since=24h' May 6 23:54:09.854: INFO: stderr: "" May 6 23:54:09.854: INFO: stdout: "I0506 23:54:03.884175 1 logs_generator.go:76] 0 GET /api/v1/namespaces/kube-system/pods/mb6 571\nI0506 23:54:04.084340 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/g6m 385\nI0506 23:54:04.284356 1 logs_generator.go:76] 2 POST /api/v1/namespaces/kube-system/pods/ktn 288\nI0506 23:54:04.484387 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/kube-system/pods/2nh 467\nI0506 23:54:04.684356 1 logs_generator.go:76] 4 GET /api/v1/namespaces/kube-system/pods/c9p 302\nI0506 23:54:04.884337 1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/47jt 279\nI0506 23:54:05.084376 1 logs_generator.go:76] 6 POST /api/v1/namespaces/default/pods/mdjr 505\nI0506 23:54:05.284332 1 logs_generator.go:76] 7 POST /api/v1/namespaces/kube-system/pods/whw 595\nI0506 23:54:05.484322 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/jb9 459\nI0506 23:54:05.684347 1 logs_generator.go:76] 9 GET /api/v1/namespaces/ns/pods/n8r 365\nI0506 23:54:05.884372 1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/tf4 214\nI0506 23:54:06.084373 1 logs_generator.go:76] 11 POST /api/v1/namespaces/ns/pods/hvkf 579\nI0506 23:54:06.284396 1 logs_generator.go:76] 12 POST /api/v1/namespaces/ns/pods/hmz5 354\nI0506 23:54:06.484355 1 logs_generator.go:76] 13 POST /api/v1/namespaces/kube-system/pods/lszx 512\nI0506 23:54:06.684399 1 logs_generator.go:76] 14 GET /api/v1/namespaces/kube-system/pods/k4zg 402\nI0506 23:54:06.884318 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/default/pods/8sfj 209\nI0506 23:54:07.084376 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/g7qr 512\nI0506 23:54:07.284346 1 logs_generator.go:76] 17 GET /api/v1/namespaces/default/pods/w7x 516\nI0506 23:54:07.484364 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/ns/pods/j5m 313\nI0506 23:54:07.684350 1 logs_generator.go:76] 19 GET /api/v1/namespaces/kube-system/pods/mf5r 499\nI0506 23:54:07.884340 1 logs_generator.go:76] 20 GET /api/v1/namespaces/default/pods/k7jx 258\nI0506 23:54:08.084342 1 logs_generator.go:76] 21 GET /api/v1/namespaces/kube-system/pods/5jz 407\nI0506 23:54:08.284345 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/default/pods/wsz 397\nI0506 
23:54:08.484418 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/kube-system/pods/k72 559\nI0506 23:54:08.684438 1 logs_generator.go:76] 24 GET /api/v1/namespaces/kube-system/pods/sjx5 350\nI0506 23:54:08.884340 1 logs_generator.go:76] 25 GET /api/v1/namespaces/kube-system/pods/dkc 298\nI0506 23:54:09.084354 1 logs_generator.go:76] 26 GET /api/v1/namespaces/default/pods/p4xg 492\nI0506 23:54:09.284379 1 logs_generator.go:76] 27 GET /api/v1/namespaces/ns/pods/vdw 403\nI0506 23:54:09.484339 1 logs_generator.go:76] 28 POST /api/v1/namespaces/ns/pods/tdqf 351\nI0506 23:54:09.684344 1 logs_generator.go:76] 29 PUT /api/v1/namespaces/default/pods/898 391\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363 May 6 23:54:09.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-6483' May 6 23:54:19.255: INFO: stderr: "" May 6 23:54:19.255: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:54:19.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6483" for this suite. • [SLOW TEST:19.174 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1353 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":191,"skipped":3095,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:54:19.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1681 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 6 23:54:19.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-1473' May 6 23:54:19.461: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 6 23:54:19.461: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1686 May 6 23:54:19.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-1473' May 6 23:54:19.640: INFO: stderr: "" May 6 23:54:19.640: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:54:19.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1473" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":192,"skipped":3142,"failed":0} ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:54:19.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:54:25.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7910" for this suite. 
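This test creates a container with neither command nor args set, asserting that the runtime falls back to the image's own ENTRYPOINT and CMD. A sketch of the relevant container (the image here is chosen for illustration; the test uses its own test image):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// No Command and no Args: the container runtime executes the image's
	// ENTRYPOINT and CMD unmodified, which is what the test asserts.
	c := corev1.Container{
		Name:  "test-container",
		Image: "docker.io/library/httpd:2.4.38-alpine",
	}
	fmt.Println(c.Command == nil && c.Args == nil)
}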
• [SLOW TEST:6.163 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":193,"skipped":3142,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:54:25.814: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating cluster-info May 6 23:54:25.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' May 6 23:54:25.971: INFO: stderr: "" May 6 23:54:25.971: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:54:25.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1255" for this suite. 
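The stdout above is peppered with \x1b[...m sequences: ANSI color escapes from kubectl cluster-info's rendering. Stripping them recovers the plain "Kubernetes master is running at ..." text that the validation matches. A small sketch of that stripping:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Remove ANSI SGR (color) escape sequences like \x1b[0;32m and \x1b[0m.
	ansi := regexp.MustCompile("\x1b\\[[0-9;]*m")
	out := "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770\x1b[0m"
	fmt.Println(ansi.ReplaceAllString(out, ""))
	// Prints: Kubernetes master is running at https://172.30.12.66:32770
}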
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":194,"skipped":3182,"failed":0} SSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:54:25.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 6 23:54:26.060: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:54:30.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1005" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":195,"skipped":3186,"failed":0} ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:54:30.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 6 23:54:31.824: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 6 23:54:33.904: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406071, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406071, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406071, loc:(*time.Location)(0x78ee0c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406071, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 23:54:35.913: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406071, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406071, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406071, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406071, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 23:54:39.006: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 6 23:54:39.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:54:40.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4257" for this suite. STEP: Destroying namespace "webhook-4257-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.141 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":196,"skipped":3186,"failed":0} SSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:54:40.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 6 23:54:40.383: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:54:44.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5217" for this suite. 
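The test above retrieves the container's log over a websocket connection to the API server; the ordinary streaming equivalent with client-go looks roughly like this (a hedged sketch: the namespace and pod name are placeholders, not values from the run above):

package main

import (
	"context"
	"fmt"
	"io"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the same kubeconfig path the suite uses.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// "default"/"example-pod" are placeholders for a real namespace/pod.
	req := clientset.CoreV1().Pods("default").GetLogs("example-pod", &corev1.PodLogOptions{})
	stream, err := req.Stream(context.TODO())
	if err != nil {
		panic(err)
	}
	defer stream.Close()
	data, _ := io.ReadAll(stream)
	fmt.Print(string(data))
}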
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":197,"skipped":3191,"failed":0} SSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:54:44.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 6 23:54:45.004: INFO: Waiting up to 5m0s for pod "busybox-user-65534-f4a1ef3f-b467-4356-a298-25042de1f0b8" in namespace "security-context-test-6917" to be "success or failure" May 6 23:54:45.158: INFO: Pod "busybox-user-65534-f4a1ef3f-b467-4356-a298-25042de1f0b8": Phase="Pending", Reason="", readiness=false. Elapsed: 153.956963ms May 6 23:54:47.162: INFO: Pod "busybox-user-65534-f4a1ef3f-b467-4356-a298-25042de1f0b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.158497277s May 6 23:54:49.166: INFO: Pod "busybox-user-65534-f4a1ef3f-b467-4356-a298-25042de1f0b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.162427056s May 6 23:54:49.166: INFO: Pod "busybox-user-65534-f4a1ef3f-b467-4356-a298-25042de1f0b8" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:54:49.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6917" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":198,"skipped":3196,"failed":0} ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:54:49.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 6 23:54:49.915: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
May 6 23:54:49.933: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node May 6 23:54:49.961: INFO: Number of nodes with available pods: 0 May 6 23:54:49.961: INFO: Node jerma-worker is running more than one daemon pod May 6 23:54:50.967: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node May 6 23:54:50.970: INFO: Number of nodes with available pods: 0 May 6 23:54:50.970: INFO: Node jerma-worker is running more than one daemon pod May 6 23:54:52.364: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node May 6 23:54:52.789: INFO: Number of nodes with available pods: 0 May 6 23:54:52.789: INFO: Node jerma-worker is running more than one daemon pod May 6 23:54:52.992: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node May 6 23:54:53.011: INFO: Number of nodes with available pods: 0 May 6 23:54:53.011: INFO: Node jerma-worker is running more than one daemon pod May 6 23:54:53.966: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node May 6 23:54:53.970: INFO: Number of nodes with available pods: 0 May 6 23:54:53.970: INFO: Node jerma-worker is running more than one daemon pod May 6 23:54:55.101: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node May 6 23:54:55.104: INFO: Number of nodes with available pods: 0 May 6 23:54:55.105: INFO: Node jerma-worker is running more than one daemon pod May 6 23:54:55.966: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node May 6 23:54:55.969: INFO: Number of nodes with available pods: 1 May 6 23:54:55.969: INFO: Node jerma-worker is running more than one daemon pod May 6 23:54:56.967: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node May 6 23:54:56.972: INFO: Number of nodes with available pods: 2 May 6 23:54:56.972: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 6 23:54:57.021: INFO: Wrong image for pod: daemon-set-9rr6c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 6 23:54:57.021: INFO: Wrong image for pod: daemon-set-r7vxl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 6 23:54:57.054: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node May 6 23:54:58.059: INFO: Wrong image for pod: daemon-set-9rr6c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 6 23:54:58.059: INFO: Wrong image for pod: daemon-set-r7vxl.
Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 6 23:54:58.063: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node May 6 23:54:59.058: INFO: Wrong image for pod: daemon-set-9rr6c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 6 23:54:59.058: INFO: Wrong image for pod: daemon-set-r7vxl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 6 23:54:59.061: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node May 6 23:55:00.059: INFO: Wrong image for pod: daemon-set-9rr6c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 6 23:55:00.059: INFO: Wrong image for pod: daemon-set-r7vxl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 6 23:55:00.059: INFO: Pod daemon-set-r7vxl is not available May 6 23:55:00.063: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node May 6 23:55:01.058: INFO: Wrong image for pod: daemon-set-9rr6c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 6 23:55:01.058: INFO: Wrong image for pod: daemon-set-r7vxl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 6 23:55:01.058: INFO: Pod daemon-set-r7vxl is not available May 6 23:55:01.061: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node May 6 23:55:02.059: INFO: Wrong image for pod: daemon-set-9rr6c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 6 23:55:02.059: INFO: Wrong image for pod: daemon-set-r7vxl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 6 23:55:02.059: INFO: Pod daemon-set-r7vxl is not available May 6 23:55:02.064: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node May 6 23:55:03.059: INFO: Wrong image for pod: daemon-set-9rr6c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 6 23:55:03.059: INFO: Wrong image for pod: daemon-set-r7vxl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 6 23:55:03.059: INFO: Pod daemon-set-r7vxl is not available May 6 23:55:03.063: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node May 6 23:55:04.059: INFO: Wrong image for pod: daemon-set-9rr6c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 6 23:55:04.059: INFO: Wrong image for pod: daemon-set-r7vxl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 6 23:55:04.059: INFO: Pod daemon-set-r7vxl is not available May 6 23:55:04.064: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node May 6 23:55:05.059: INFO: Wrong image for pod: daemon-set-9rr6c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 6 23:55:05.059: INFO: Wrong image for pod: daemon-set-r7vxl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 6 23:55:05.059: INFO: Pod daemon-set-r7vxl is not available May 6 23:55:05.063: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node May 6 23:55:06.059: INFO: Wrong image for pod: daemon-set-9rr6c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 6 23:55:06.059: INFO: Wrong image for pod: daemon-set-r7vxl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 6 23:55:06.059: INFO: Pod daemon-set-r7vxl is not available May 6 23:55:06.063: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node May 6 23:55:07.059: INFO: Wrong image for pod: daemon-set-9rr6c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 6 23:55:07.059: INFO: Wrong image for pod: daemon-set-r7vxl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 6 23:55:07.059: INFO: Pod daemon-set-r7vxl is not available May 6 23:55:07.063: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node May 6 23:55:08.059: INFO: Wrong image for pod: daemon-set-9rr6c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 6 23:55:08.059: INFO: Wrong image for pod: daemon-set-r7vxl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 6 23:55:08.059: INFO: Pod daemon-set-r7vxl is not available May 6 23:55:08.064: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node May 6 23:55:09.059: INFO: Wrong image for pod: daemon-set-9rr6c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 6 23:55:09.059: INFO: Wrong image for pod: daemon-set-r7vxl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 6 23:55:09.059: INFO: Pod daemon-set-r7vxl is not available May 6 23:55:09.063: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node May 6 23:55:10.059: INFO: Wrong image for pod: daemon-set-9rr6c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
May 6 23:55:10.059: INFO: Pod daemon-set-hhp6m is not available May 6 23:55:10.063: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 23:55:11.099: INFO: Wrong image for pod: daemon-set-9rr6c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 6 23:55:11.099: INFO: Pod daemon-set-hhp6m is not available May 6 23:55:11.102: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 23:55:12.059: INFO: Wrong image for pod: daemon-set-9rr6c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 6 23:55:12.059: INFO: Pod daemon-set-hhp6m is not available May 6 23:55:12.062: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 23:55:13.059: INFO: Wrong image for pod: daemon-set-9rr6c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 6 23:55:13.063: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 23:55:14.093: INFO: Wrong image for pod: daemon-set-9rr6c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 6 23:55:14.096: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 23:55:15.058: INFO: Wrong image for pod: daemon-set-9rr6c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 6 23:55:15.058: INFO: Pod daemon-set-9rr6c is not available May 6 23:55:15.062: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 23:55:16.058: INFO: Wrong image for pod: daemon-set-9rr6c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 6 23:55:16.058: INFO: Pod daemon-set-9rr6c is not available May 6 23:55:16.061: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 23:55:17.059: INFO: Wrong image for pod: daemon-set-9rr6c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 6 23:55:17.059: INFO: Pod daemon-set-9rr6c is not available May 6 23:55:17.064: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 23:55:18.058: INFO: Wrong image for pod: daemon-set-9rr6c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
May 6 23:55:18.058: INFO: Pod daemon-set-9rr6c is not available May 6 23:55:18.061: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 23:55:19.059: INFO: Wrong image for pod: daemon-set-9rr6c. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 6 23:55:19.059: INFO: Pod daemon-set-9rr6c is not available May 6 23:55:19.064: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 23:55:20.059: INFO: Pod daemon-set-x56fj is not available May 6 23:55:20.063: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. May 6 23:55:20.088: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 23:55:20.090: INFO: Number of nodes with available pods: 1 May 6 23:55:20.090: INFO: Node jerma-worker2 is running more than one daemon pod May 6 23:55:21.100: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 23:55:21.104: INFO: Number of nodes with available pods: 1 May 6 23:55:21.104: INFO: Node jerma-worker2 is running more than one daemon pod May 6 23:55:22.214: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 23:55:22.269: INFO: Number of nodes with available pods: 1 May 6 23:55:22.269: INFO: Node jerma-worker2 is running more than one daemon pod May 6 23:55:23.094: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 23:55:23.097: INFO: Number of nodes with available pods: 1 May 6 23:55:23.097: INFO: Node jerma-worker2 is running more than one daemon pod May 6 23:55:24.097: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 23:55:24.100: INFO: Number of nodes with available pods: 1 May 6 23:55:24.100: INFO: Node jerma-worker2 is running more than one daemon pod May 6 23:55:25.095: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 6 23:55:25.099: INFO: Number of nodes with available pods: 2 May 6 23:55:25.099: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3475, will wait for the garbage collector to delete the pods May 6 23:55:25.170: INFO: Deleting DaemonSet.extensions daemon-set took: 5.919949ms May 6 23:55:25.671: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.200146ms May 6 23:55:39.274: 
INFO: Number of nodes with available pods: 0 May 6 23:55:39.274: INFO: Number of running nodes: 0, number of available pods: 0 May 6 23:55:39.277: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3475/daemonsets","resourceVersion":"14036362"},"items":null} May 6 23:55:39.279: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3475/pods","resourceVersion":"14036362"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:55:39.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3475" for this suite. • [SLOW TEST:50.118 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":199,"skipped":3196,"failed":0} S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:55:39.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-ae53cdab-5993-4797-b890-58adea01626d STEP: Creating a pod to test consume configMaps May 6 23:55:39.408: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-eaa34232-1cec-4354-a21d-22fff7e4e0b7" in namespace "projected-395" to be "success or failure" May 6 23:55:39.555: INFO: Pod "pod-projected-configmaps-eaa34232-1cec-4354-a21d-22fff7e4e0b7": Phase="Pending", Reason="", readiness=false. Elapsed: 147.04135ms May 6 23:55:41.770: INFO: Pod "pod-projected-configmaps-eaa34232-1cec-4354-a21d-22fff7e4e0b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.361979925s May 6 23:55:43.775: INFO: Pod "pod-projected-configmaps-eaa34232-1cec-4354-a21d-22fff7e4e0b7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.366685637s STEP: Saw pod success May 6 23:55:43.775: INFO: Pod "pod-projected-configmaps-eaa34232-1cec-4354-a21d-22fff7e4e0b7" satisfied condition "success or failure" May 6 23:55:43.778: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-eaa34232-1cec-4354-a21d-22fff7e4e0b7 container projected-configmap-volume-test: STEP: delete the pod May 6 23:55:43.814: INFO: Waiting for pod pod-projected-configmaps-eaa34232-1cec-4354-a21d-22fff7e4e0b7 to disappear May 6 23:55:43.817: INFO: Pod pod-projected-configmaps-eaa34232-1cec-4354-a21d-22fff7e4e0b7 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:55:43.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-395" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":200,"skipped":3197,"failed":0} S ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:55:43.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:56:19.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-5990" for this suite. STEP: Destroying namespace "nsdeletetest-1135" for this suite. May 6 23:56:19.214: INFO: Namespace nsdeletetest-1135 was already deleted STEP: Destroying namespace "nsdeletetest-140" for this suite. 
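Namespace deletion is cascading: the namespace controller drives every contained object, pods included, to deletion before the namespace itself disappears, which is what the "Waiting for the namespace to be removed" step above observes. A rough client-go equivalent, assuming a recent client-go (v0.18+ context-taking signatures; the v1.17-era client this suite was built against omits the context argument):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// Cascading delete: everything inside the namespace is removed before
	// the namespace object itself goes away.
	if err := cs.CoreV1().Namespaces().Delete(ctx, "nsdeletetest-1135", metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
	// A real client must poll here until the namespace finishes terminating
	// (the test's waiting step) before a namespace of the same name can be
	// recreated; the recreated namespace then contains no pods.
	pods, err := cs.CoreV1().Pods("nsdeletetest-1135").List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pods remaining: %d\n", len(pods.Items))
}
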
• [SLOW TEST:35.396 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":201,"skipped":3198,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:56:19.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 6 23:56:19.595: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9d8971f9-5701-460f-b1b8-056177bbdad9" in namespace "projected-7696" to be "success or failure" May 6 23:56:19.609: INFO: Pod "downwardapi-volume-9d8971f9-5701-460f-b1b8-056177bbdad9": Phase="Pending", Reason="", readiness=false. Elapsed: 14.036882ms May 6 23:56:21.617: INFO: Pod "downwardapi-volume-9d8971f9-5701-460f-b1b8-056177bbdad9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022466139s May 6 23:56:23.621: INFO: Pod "downwardapi-volume-9d8971f9-5701-460f-b1b8-056177bbdad9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026417218s STEP: Saw pod success May 6 23:56:23.621: INFO: Pod "downwardapi-volume-9d8971f9-5701-460f-b1b8-056177bbdad9" satisfied condition "success or failure" May 6 23:56:23.624: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-9d8971f9-5701-460f-b1b8-056177bbdad9 container client-container: STEP: delete the pod May 6 23:56:24.022: INFO: Waiting for pod downwardapi-volume-9d8971f9-5701-460f-b1b8-056177bbdad9 to disappear May 6 23:56:24.178: INFO: Pod downwardapi-volume-9d8971f9-5701-460f-b1b8-056177bbdad9 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:56:24.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7696" for this suite. 
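The "mode on item file" assertion below concerns the per-item Mode field of a projected downward API volume: the kubelet writes the projected file with exactly those permission bits. A sketch of the volume shape involved, assuming the k8s.io/api/core/v1 types; the path, field reference, and 0400 mode are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400) // per-item file mode the test asserts on disk
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "podname",
							FieldRef: &corev1.ObjectFieldSelector{
								APIVersion: "v1",
								FieldPath:  "metadata.name",
							},
							Mode: &mode,
						}},
					},
				}},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}
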
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":202,"skipped":3211,"failed":0} ------------------------------ [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:56:24.221: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1489 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 6 23:56:24.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-9281' May 6 23:56:24.398: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 6 23:56:24.398: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created [AfterEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1495 May 6 23:56:26.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-9281' May 6 23:56:26.884: INFO: stderr: "" May 6 23:56:26.884: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:56:26.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9281" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":203,"skipped":3211,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:56:26.893: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-f42989df-ed54-4599-a470-91fbe39022de STEP: Creating a pod to test consume configMaps May 6 23:56:27.104: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-277af005-a094-4e10-a545-305dce744cfb" in namespace "projected-1603" to be "success or failure" May 6 23:56:27.202: INFO: Pod "pod-projected-configmaps-277af005-a094-4e10-a545-305dce744cfb": Phase="Pending", Reason="", readiness=false. Elapsed: 98.401878ms May 6 23:56:29.207: INFO: Pod "pod-projected-configmaps-277af005-a094-4e10-a545-305dce744cfb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.103197719s May 6 23:56:31.212: INFO: Pod "pod-projected-configmaps-277af005-a094-4e10-a545-305dce744cfb": Phase="Running", Reason="", readiness=true. Elapsed: 4.108504801s May 6 23:56:33.218: INFO: Pod "pod-projected-configmaps-277af005-a094-4e10-a545-305dce744cfb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.11379105s STEP: Saw pod success May 6 23:56:33.218: INFO: Pod "pod-projected-configmaps-277af005-a094-4e10-a545-305dce744cfb" satisfied condition "success or failure" May 6 23:56:33.247: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-277af005-a094-4e10-a545-305dce744cfb container projected-configmap-volume-test: STEP: delete the pod May 6 23:56:33.293: INFO: Waiting for pod pod-projected-configmaps-277af005-a094-4e10-a545-305dce744cfb to disappear May 6 23:56:33.300: INFO: Pod pod-projected-configmaps-277af005-a094-4e10-a545-305dce744cfb no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:56:33.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1603" for this suite. 
• [SLOW TEST:6.414 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":204,"skipped":3240,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:56:33.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-5sz9 STEP: Creating a pod to test atomic-volume-subpath May 6 23:56:33.380: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-5sz9" in namespace "subpath-9955" to be "success or failure" May 6 23:56:33.384: INFO: Pod "pod-subpath-test-configmap-5sz9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.140788ms May 6 23:56:35.412: INFO: Pod "pod-subpath-test-configmap-5sz9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03203238s May 6 23:56:37.416: INFO: Pod "pod-subpath-test-configmap-5sz9": Phase="Running", Reason="", readiness=true. Elapsed: 4.036057058s May 6 23:56:39.421: INFO: Pod "pod-subpath-test-configmap-5sz9": Phase="Running", Reason="", readiness=true. Elapsed: 6.040915419s May 6 23:56:41.425: INFO: Pod "pod-subpath-test-configmap-5sz9": Phase="Running", Reason="", readiness=true. Elapsed: 8.045275138s May 6 23:56:43.430: INFO: Pod "pod-subpath-test-configmap-5sz9": Phase="Running", Reason="", readiness=true. Elapsed: 10.049766166s May 6 23:56:45.435: INFO: Pod "pod-subpath-test-configmap-5sz9": Phase="Running", Reason="", readiness=true. Elapsed: 12.054589016s May 6 23:56:47.439: INFO: Pod "pod-subpath-test-configmap-5sz9": Phase="Running", Reason="", readiness=true. Elapsed: 14.058668227s May 6 23:56:49.443: INFO: Pod "pod-subpath-test-configmap-5sz9": Phase="Running", Reason="", readiness=true. Elapsed: 16.063232467s May 6 23:56:51.448: INFO: Pod "pod-subpath-test-configmap-5sz9": Phase="Running", Reason="", readiness=true. Elapsed: 18.067900533s May 6 23:56:53.453: INFO: Pod "pod-subpath-test-configmap-5sz9": Phase="Running", Reason="", readiness=true. Elapsed: 20.07258846s May 6 23:56:55.478: INFO: Pod "pod-subpath-test-configmap-5sz9": Phase="Running", Reason="", readiness=true. Elapsed: 22.098345047s May 6 23:56:57.814: INFO: Pod "pod-subpath-test-configmap-5sz9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.433992308s STEP: Saw pod success May 6 23:56:57.814: INFO: Pod "pod-subpath-test-configmap-5sz9" satisfied condition "success or failure" May 6 23:56:57.817: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-5sz9 container test-container-subpath-configmap-5sz9: STEP: delete the pod May 6 23:56:58.091: INFO: Waiting for pod pod-subpath-test-configmap-5sz9 to disappear May 6 23:56:58.119: INFO: Pod pod-subpath-test-configmap-5sz9 no longer exists STEP: Deleting pod pod-subpath-test-configmap-5sz9 May 6 23:56:58.119: INFO: Deleting pod "pod-subpath-test-configmap-5sz9" in namespace "subpath-9955" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:56:58.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9955" for this suite. • [SLOW TEST:24.819 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":205,"skipped":3302,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:56:58.128: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-5359/configmap-test-a3fd1174-1a3c-4ba1-952e-38c6d01cbdc8 STEP: Creating a pod to test consume configMaps May 6 23:56:58.378: INFO: Waiting up to 5m0s for pod "pod-configmaps-c7af8142-ca9b-4459-8217-ce4bb06129e0" in namespace "configmap-5359" to be "success or failure" May 6 23:56:58.382: INFO: Pod "pod-configmaps-c7af8142-ca9b-4459-8217-ce4bb06129e0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.835571ms May 6 23:57:00.386: INFO: Pod "pod-configmaps-c7af8142-ca9b-4459-8217-ce4bb06129e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008331981s May 6 23:57:02.390: INFO: Pod "pod-configmaps-c7af8142-ca9b-4459-8217-ce4bb06129e0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012563545s STEP: Saw pod success May 6 23:57:02.390: INFO: Pod "pod-configmaps-c7af8142-ca9b-4459-8217-ce4bb06129e0" satisfied condition "success or failure" May 6 23:57:02.393: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-c7af8142-ca9b-4459-8217-ce4bb06129e0 container env-test: STEP: delete the pod May 6 23:57:02.449: INFO: Waiting for pod pod-configmaps-c7af8142-ca9b-4459-8217-ce4bb06129e0 to disappear May 6 23:57:02.466: INFO: Pod pod-configmaps-c7af8142-ca9b-4459-8217-ce4bb06129e0 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:57:02.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5359" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":206,"skipped":3346,"failed":0} SSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:57:02.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token STEP: reading a file in the container May 6 23:57:07.066: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2185 pod-service-account-df027597-8318-418c-bcd3-bd2ec88f5e41 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container May 6 23:57:07.265: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2185 pod-service-account-df027597-8318-418c-bcd3-bd2ec88f5e41 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container May 6 23:57:07.492: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2185 pod-service-account-df027597-8318-418c-bcd3-bd2ec88f5e41 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:57:07.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-2185" for this suite. 
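The three kubectl exec calls above read the standard files that the service account admission controller mounts into every pod with token automounting enabled. The same check from inside the container is a few lines of Go (os.ReadFile needs Go 1.16+; the era of this suite would have used ioutil.ReadFile); the directory path is fixed by Kubernetes, the rest is a sketch:

package main

import (
	"fmt"
	"os"
)

func main() {
	const dir = "/var/run/secrets/kubernetes.io/serviceaccount/"
	// token, ca.crt, and namespace are the three files the test cats.
	for _, name := range []string{"token", "ca.crt", "namespace"} {
		b, err := os.ReadFile(dir + name)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s: %d bytes\n", name, len(b))
	}
}
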
• [SLOW TEST:5.302 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":207,"skipped":3350,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:57:07.776: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 6 23:57:07.900: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-5cc3bfc0-a6c2-4a99-aae3-6557c885455c" in namespace "security-context-test-1564" to be "success or failure" May 6 23:57:07.936: INFO: Pod "busybox-readonly-false-5cc3bfc0-a6c2-4a99-aae3-6557c885455c": Phase="Pending", Reason="", readiness=false. Elapsed: 35.400679ms May 6 23:57:09.940: INFO: Pod "busybox-readonly-false-5cc3bfc0-a6c2-4a99-aae3-6557c885455c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039933851s May 6 23:57:11.951: INFO: Pod "busybox-readonly-false-5cc3bfc0-a6c2-4a99-aae3-6557c885455c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050695728s May 6 23:57:11.951: INFO: Pod "busybox-readonly-false-5cc3bfc0-a6c2-4a99-aae3-6557c885455c" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:57:11.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1564" for this suite. 
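With readOnlyRootFilesystem left false, the container may write anywhere in its root filesystem, so a simple write-then-read command exits 0 and the pod reaches Succeeded, satisfying the "success or failure" poll above. A sketch of the container fragment, assuming the k8s.io/api/core/v1 types; image and command are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	readOnly := false
	c := corev1.Container{
		Name:  "busybox-readonly-false",
		Image: "docker.io/library/busybox:1.29",
		// Writing to the root filesystem succeeds, the command exits 0,
		// and the pod phase becomes Succeeded.
		Command: []string{"sh", "-c", "echo checking > /file && cat /file"},
		SecurityContext: &corev1.SecurityContext{
			ReadOnlyRootFilesystem: &readOnly,
		},
	}
	fmt.Printf("%+v\n", c.SecurityContext)
}
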
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":208,"skipped":3359,"failed":0} SSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:57:11.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 6 23:57:12.109: INFO: Waiting up to 5m0s for pod "downward-api-fa18b72a-2106-4f2d-9abc-44639cca2382" in namespace "downward-api-732" to be "success or failure" May 6 23:57:12.112: INFO: Pod "downward-api-fa18b72a-2106-4f2d-9abc-44639cca2382": Phase="Pending", Reason="", readiness=false. Elapsed: 2.236319ms May 6 23:57:14.116: INFO: Pod "downward-api-fa18b72a-2106-4f2d-9abc-44639cca2382": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006544646s May 6 23:57:16.257: INFO: Pod "downward-api-fa18b72a-2106-4f2d-9abc-44639cca2382": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.147355046s STEP: Saw pod success May 6 23:57:16.257: INFO: Pod "downward-api-fa18b72a-2106-4f2d-9abc-44639cca2382" satisfied condition "success or failure" May 6 23:57:16.260: INFO: Trying to get logs from node jerma-worker2 pod downward-api-fa18b72a-2106-4f2d-9abc-44639cca2382 container dapi-container: STEP: delete the pod May 6 23:57:16.421: INFO: Waiting for pod downward-api-fa18b72a-2106-4f2d-9abc-44639cca2382 to disappear May 6 23:57:16.449: INFO: Pod downward-api-fa18b72a-2106-4f2d-9abc-44639cca2382 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:57:16.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-732" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":209,"skipped":3363,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:57:16.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 6 23:57:16.802: INFO: Creating deployment "test-recreate-deployment" May 6 23:57:16.851: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 6 23:57:16.922: INFO: deployment "test-recreate-deployment" doesn't have the required revision set May 6 23:57:18.930: INFO: Waiting deployment "test-recreate-deployment" to complete May 6 23:57:18.934: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406236, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406236, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406237, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406236, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 23:57:20.963: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406236, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406236, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406237, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406236, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 23:57:22.938: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 6 23:57:22.946: INFO: 
Updating deployment test-recreate-deployment May 6 23:57:22.946: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 6 23:57:23.488: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-1915 /apis/apps/v1/namespaces/deployment-1915/deployments/test-recreate-deployment 523f0195-4b9b-46cf-973e-2851e5a530a4 14037011 2 2020-05-06 23:57:16 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003c0ef58 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-06 23:57:23 +0000 UTC,LastTransitionTime:2020-05-06 23:57:23 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-05-06 23:57:23 +0000 UTC,LastTransitionTime:2020-05-06 23:57:16 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} May 6 23:57:23.611: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-1915 /apis/apps/v1/namespaces/deployment-1915/replicasets/test-recreate-deployment-5f94c574ff 981ed036-3999-4078-9874-f5075a2a018a 14037008 1 2020-05-06 23:57:23 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 523f0195-4b9b-46cf-973e-2851e5a530a4 0xc003c0f2f7 0xc003c0f2f8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003c0f358 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 6 23:57:23.611: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 6 23:57:23.612: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-1915 /apis/apps/v1/namespaces/deployment-1915/replicasets/test-recreate-deployment-799c574856 ddd00b16-e01d-4991-a8ca-4969c578ea13 14037000 2 2020-05-06 23:57:16 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 523f0195-4b9b-46cf-973e-2851e5a530a4 0xc003c0f3c7 0xc003c0f3c8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003c0f438 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 6 23:57:23.779: INFO: Pod "test-recreate-deployment-5f94c574ff-fwb9b" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-fwb9b test-recreate-deployment-5f94c574ff- deployment-1915 /api/v1/namespaces/deployment-1915/pods/test-recreate-deployment-5f94c574ff-fwb9b 72299d58-c3f5-461d-8256-a0d1332aadc7 14037015 0 2020-05-06 23:57:23 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 981ed036-3999-4078-9874-f5075a2a018a 0xc003c0f8b7 0xc003c0f8b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gfph4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gfph4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gfph4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:57:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:57:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:57:23 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-06 23:57:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-06 23:57:23 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:57:23.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1915" for this suite. • [SLOW TEST:7.333 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":210,"skipped":3427,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:57:23.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 6 23:57:23.948: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:57:33.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7684" for this suite. 
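Init containers run sequentially to completion before any app container starts; with restartPolicy Never, a failing init container fails the pod outright, which is the behavior this spec checks. A sketch of the pod spec shape, assuming the k8s.io/api/core/v1 types; names, images, and commands are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	busybox := "docker.io/library/busybox:1.29"
	spec := corev1.PodSpec{
		// Init containers run one at a time, in order, to completion
		// before any regular container starts; with RestartPolicy Never
		// a failing init container fails the whole pod.
		RestartPolicy: corev1.RestartPolicyNever,
		InitContainers: []corev1.Container{
			{Name: "init1", Image: busybox, Command: []string{"/bin/true"}},
			{Name: "init2", Image: busybox, Command: []string{"/bin/true"}},
		},
		Containers: []corev1.Container{
			{Name: "run1", Image: busybox, Command: []string{"/bin/true"}},
		},
	}
	fmt.Printf("init containers: %d\n", len(spec.InitContainers))
}
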
• [SLOW TEST:9.723 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":211,"skipped":3445,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:57:33.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 6 23:57:33.583: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 6 23:57:33.601: INFO: Waiting for terminating namespaces to be deleted... May 6 23:57:33.604: INFO: Logging pods the kubelet thinks are on node jerma-worker before test May 6 23:57:33.610: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 6 23:57:33.610: INFO: Container kindnet-cni ready: true, restart count 0 May 6 23:57:33.610: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 6 23:57:33.610: INFO: Container kube-proxy ready: true, restart count 0 May 6 23:57:33.610: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test May 6 23:57:33.615: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) May 6 23:57:33.615: INFO: Container kube-hunter ready: false, restart count 0 May 6 23:57:33.615: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 6 23:57:33.615: INFO: Container kindnet-cni ready: true, restart count 0 May 6 23:57:33.615: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) May 6 23:57:33.615: INFO: Container kube-bench ready: false, restart count 0 May 6 23:57:33.615: INFO: pod-init-45971087-dca2-48b3-aac1-fac40936685b from init-container-7684 started at 2020-05-06 23:57:24 +0000 UTC (1 container status recorded) May 6 23:57:33.615: INFO: Container run1 ready: false, restart count 0 May 6 23:57:33.615: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 6 23:57:33.615: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-d4cef1c7-7a45-44dd-9c8b-2f7b527ef223 90 STEP: Trying to create a pod (pod1) with hostPort 54321 and hostIP 127.0.0.1 and expecting it to be scheduled STEP: Trying to create another pod (pod2) with hostPort 54321 but hostIP 127.0.0.2 on the node where pod1 resides and expecting it to be scheduled STEP: Trying to create a third pod (pod3) with hostPort 54321 and hostIP 127.0.0.2 but using the UDP protocol on the node where pod2 resides STEP: removing the label kubernetes.io/e2e-d4cef1c7-7a45-44dd-9c8b-2f7b527ef223 off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-d4cef1c7-7a45-44dd-9c8b-2f7b527ef223 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:57:52.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2694" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:18.594 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":212,"skipped":3474,"failed":0} SSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:57:52.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:57:52.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-5851" for this suite.
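The Lease spec above only verifies that the coordination.k8s.io/v1 API is served and that Lease objects support the usual verbs. Below is a minimal sketch of constructing such a Lease with the coordination/v1 Go types; the name, namespace, holder identity, and duration are illustrative assumptions.

    package main

    import (
        "fmt"
        "time"

        coordinationv1 "k8s.io/api/coordination/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func strPtr(s string) *string { return &s }

    func int32Ptr(i int32) *int32 { return &i }

    func main() {
        now := metav1.NewMicroTime(time.Now())
        // A Lease records who holds it and for how long; a holder keeps it
        // by updating spec.renewTime before leaseDurationSeconds elapses.
        lease := &coordinationv1.Lease{
            ObjectMeta: metav1.ObjectMeta{Name: "example-lease", Namespace: "lease-test"},
            Spec: coordinationv1.LeaseSpec{
                HolderIdentity:       strPtr("holder-1"),
                LeaseDurationSeconds: int32Ptr(30),
                AcquireTime:          &now,
                RenewTime:            &now,
            },
        }
        fmt.Printf("lease %s/%s held by %s for %ds\n",
            lease.Namespace, lease.Name, *lease.Spec.HolderIdentity, *lease.Spec.LeaseDurationSeconds)
    }

The same renew-before-expiry pattern is what kubelets use for node heartbeats in the kube-node-lease namespace.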
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":213,"skipped":3481,"failed":0} ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:57:52.801: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 6 23:57:53.024: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-3616 I0506 23:57:53.088312 6 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-3616, replica count: 1 I0506 23:57:54.138800 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 23:57:55.139044 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0506 23:57:56.139236 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 6 23:57:56.546: INFO: Created: latency-svc-28hps May 6 23:57:56.557: INFO: Got endpoints: latency-svc-28hps [317.839882ms] May 6 23:57:56.634: INFO: Created: latency-svc-d9s5b May 6 23:57:56.702: INFO: Got endpoints: latency-svc-d9s5b [145.461195ms] May 6 23:57:56.826: INFO: Created: latency-svc-6vkxc May 6 23:57:56.836: INFO: Got endpoints: latency-svc-6vkxc [278.679289ms] May 6 23:57:56.911: INFO: Created: latency-svc-fbpqx May 6 23:57:56.970: INFO: Got endpoints: latency-svc-fbpqx [412.660896ms] May 6 23:57:57.001: INFO: Created: latency-svc-nbgsw May 6 23:57:57.020: INFO: Got endpoints: latency-svc-nbgsw [462.773957ms] May 6 23:57:57.066: INFO: Created: latency-svc-wgpd7 May 6 23:57:57.132: INFO: Got endpoints: latency-svc-wgpd7 [574.624179ms] May 6 23:57:57.181: INFO: Created: latency-svc-n2hp6 May 6 23:57:57.197: INFO: Got endpoints: latency-svc-n2hp6 [639.767234ms] May 6 23:57:57.330: INFO: Created: latency-svc-c7h6l May 6 23:57:57.361: INFO: Got endpoints: latency-svc-c7h6l [803.716309ms] May 6 23:57:57.362: INFO: Created: latency-svc-6xttx May 6 23:57:57.392: INFO: Got endpoints: latency-svc-6xttx [834.362362ms] May 6 23:57:57.479: INFO: Created: latency-svc-zmlfg May 6 23:57:57.482: INFO: Got endpoints: latency-svc-zmlfg [925.138209ms] May 6 23:57:57.546: INFO: Created: latency-svc-489c5 May 6 23:57:57.556: INFO: Got endpoints: latency-svc-489c5 [998.199389ms] May 6 23:57:57.577: INFO: Created: latency-svc-5lbcg May 6 23:57:57.623: INFO: Got endpoints: latency-svc-5lbcg [1.065192923s] May 6 23:57:57.678: INFO: Created: latency-svc-k2nxw May 6 23:57:57.695: INFO: Got endpoints: latency-svc-k2nxw [1.138182813s] May 6 23:57:57.780: INFO: Created: latency-svc-lrlr2 May 6 23:57:57.798: INFO: Got endpoints: latency-svc-lrlr2 [1.240779771s] May 6 23:57:57.835: INFO: Created: latency-svc-w5xm9 May 6 23:57:57.923: INFO: Got endpoints: 
latency-svc-w5xm9 [1.365782556s] May 6 23:57:57.962: INFO: Created: latency-svc-xbkwq May 6 23:57:57.979: INFO: Got endpoints: latency-svc-xbkwq [1.421637402s] May 6 23:57:58.079: INFO: Created: latency-svc-jclk5 May 6 23:57:58.092: INFO: Got endpoints: latency-svc-jclk5 [1.389379017s] May 6 23:57:58.117: INFO: Created: latency-svc-4cj7f May 6 23:57:58.134: INFO: Got endpoints: latency-svc-4cj7f [1.298385122s] May 6 23:57:58.292: INFO: Created: latency-svc-pm8zb May 6 23:57:58.344: INFO: Got endpoints: latency-svc-pm8zb [1.373958544s] May 6 23:57:58.413: INFO: Created: latency-svc-4s9t9 May 6 23:57:58.428: INFO: Got endpoints: latency-svc-4s9t9 [1.408376372s] May 6 23:57:58.452: INFO: Created: latency-svc-kn8x8 May 6 23:57:58.471: INFO: Got endpoints: latency-svc-kn8x8 [1.338863477s] May 6 23:57:58.582: INFO: Created: latency-svc-m8h8h May 6 23:57:58.585: INFO: Got endpoints: latency-svc-m8h8h [1.388449449s] May 6 23:57:58.772: INFO: Created: latency-svc-nq6zm May 6 23:57:59.276: INFO: Got endpoints: latency-svc-nq6zm [1.915187342s] May 6 23:57:59.479: INFO: Created: latency-svc-bwj57 May 6 23:57:59.483: INFO: Got endpoints: latency-svc-bwj57 [2.091197135s] May 6 23:57:59.702: INFO: Created: latency-svc-jdk5d May 6 23:57:59.774: INFO: Got endpoints: latency-svc-jdk5d [2.292157981s] May 6 23:57:59.951: INFO: Created: latency-svc-pcplk May 6 23:57:59.977: INFO: Got endpoints: latency-svc-pcplk [2.421612281s] May 6 23:58:00.128: INFO: Created: latency-svc-k9zdl May 6 23:58:00.145: INFO: Got endpoints: latency-svc-k9zdl [2.522777422s] May 6 23:58:00.270: INFO: Created: latency-svc-l88ml May 6 23:58:00.307: INFO: Got endpoints: latency-svc-l88ml [2.611182502s] May 6 23:58:00.413: INFO: Created: latency-svc-wnv9d May 6 23:58:00.421: INFO: Got endpoints: latency-svc-wnv9d [2.623140755s] May 6 23:58:00.446: INFO: Created: latency-svc-8hdkq May 6 23:58:00.463: INFO: Got endpoints: latency-svc-8hdkq [2.540190356s] May 6 23:58:00.492: INFO: Created: latency-svc-zdwkz May 6 23:58:00.511: INFO: Got endpoints: latency-svc-zdwkz [2.532082273s] May 6 23:58:00.577: INFO: Created: latency-svc-nt6hh May 6 23:58:00.596: INFO: Got endpoints: latency-svc-nt6hh [2.504258425s] May 6 23:58:00.650: INFO: Created: latency-svc-k8w9c May 6 23:58:00.669: INFO: Got endpoints: latency-svc-k8w9c [2.534489692s] May 6 23:58:00.734: INFO: Created: latency-svc-s57m4 May 6 23:58:00.775: INFO: Got endpoints: latency-svc-s57m4 [2.430675891s] May 6 23:58:00.844: INFO: Created: latency-svc-5zsz4 May 6 23:58:00.846: INFO: Got endpoints: latency-svc-5zsz4 [2.418078351s] May 6 23:58:00.908: INFO: Created: latency-svc-6m9vb May 6 23:58:00.928: INFO: Got endpoints: latency-svc-6m9vb [2.457214502s] May 6 23:58:00.982: INFO: Created: latency-svc-2h8xl May 6 23:58:00.993: INFO: Got endpoints: latency-svc-2h8xl [2.40784265s] May 6 23:58:01.039: INFO: Created: latency-svc-cmkk9 May 6 23:58:01.072: INFO: Got endpoints: latency-svc-cmkk9 [1.795163136s] May 6 23:58:01.162: INFO: Created: latency-svc-6tm9n May 6 23:58:01.168: INFO: Got endpoints: latency-svc-6tm9n [1.685284568s] May 6 23:58:01.201: INFO: Created: latency-svc-9c6xz May 6 23:58:01.217: INFO: Got endpoints: latency-svc-9c6xz [1.44282461s] May 6 23:58:01.248: INFO: Created: latency-svc-l6t7n May 6 23:58:01.266: INFO: Got endpoints: latency-svc-l6t7n [1.28830488s] May 6 23:58:01.326: INFO: Created: latency-svc-mctbt May 6 23:58:01.862: INFO: Got endpoints: latency-svc-mctbt [1.716940754s] May 6 23:58:02.128: INFO: Created: latency-svc-77fhz May 6 23:58:02.288: INFO: Got endpoints: 
latency-svc-77fhz [1.980772298s] May 6 23:58:02.361: INFO: Created: latency-svc-8bpls May 6 23:58:02.455: INFO: Got endpoints: latency-svc-8bpls [2.034304943s] May 6 23:58:02.798: INFO: Created: latency-svc-kp449 May 6 23:58:02.961: INFO: Got endpoints: latency-svc-kp449 [2.498015406s] May 6 23:58:03.034: INFO: Created: latency-svc-s2bcl May 6 23:58:03.108: INFO: Got endpoints: latency-svc-s2bcl [2.596349899s] May 6 23:58:03.579: INFO: Created: latency-svc-l8xzm May 6 23:58:03.742: INFO: Got endpoints: latency-svc-l8xzm [3.146163961s] May 6 23:58:03.997: INFO: Created: latency-svc-rxrgd May 6 23:58:04.028: INFO: Got endpoints: latency-svc-rxrgd [3.358764554s] May 6 23:58:04.251: INFO: Created: latency-svc-7nl4t May 6 23:58:04.255: INFO: Got endpoints: latency-svc-7nl4t [3.480063003s] May 6 23:58:04.522: INFO: Created: latency-svc-bz8mv May 6 23:58:04.603: INFO: Got endpoints: latency-svc-bz8mv [3.756792883s] May 6 23:58:04.702: INFO: Created: latency-svc-6skdq May 6 23:58:04.756: INFO: Got endpoints: latency-svc-6skdq [3.827948942s] May 6 23:58:04.865: INFO: Created: latency-svc-tjbr8 May 6 23:58:04.874: INFO: Got endpoints: latency-svc-tjbr8 [3.880551849s] May 6 23:58:04.916: INFO: Created: latency-svc-2s5pg May 6 23:58:04.928: INFO: Got endpoints: latency-svc-2s5pg [3.856066915s] May 6 23:58:05.020: INFO: Created: latency-svc-srhj2 May 6 23:58:05.024: INFO: Got endpoints: latency-svc-srhj2 [3.856156124s] May 6 23:58:05.154: INFO: Created: latency-svc-q54x6 May 6 23:58:05.168: INFO: Got endpoints: latency-svc-q54x6 [3.950340442s] May 6 23:58:05.206: INFO: Created: latency-svc-znpks May 6 23:58:05.217: INFO: Got endpoints: latency-svc-znpks [3.951252732s] May 6 23:58:05.243: INFO: Created: latency-svc-bb624 May 6 23:58:05.253: INFO: Got endpoints: latency-svc-bb624 [3.391011541s] May 6 23:58:05.323: INFO: Created: latency-svc-28t8g May 6 23:58:05.385: INFO: Got endpoints: latency-svc-28t8g [3.097593449s] May 6 23:58:05.764: INFO: Created: latency-svc-b64p5 May 6 23:58:05.829: INFO: Got endpoints: latency-svc-b64p5 [3.373435338s] May 6 23:58:05.940: INFO: Created: latency-svc-mqwc6 May 6 23:58:05.948: INFO: Got endpoints: latency-svc-mqwc6 [2.986950843s] May 6 23:58:06.351: INFO: Created: latency-svc-s25nv May 6 23:58:06.363: INFO: Got endpoints: latency-svc-s25nv [3.255557186s] May 6 23:58:06.612: INFO: Created: latency-svc-wpwv9 May 6 23:58:06.615: INFO: Got endpoints: latency-svc-wpwv9 [2.872911464s] May 6 23:58:07.173: INFO: Created: latency-svc-hwlzd May 6 23:58:07.184: INFO: Got endpoints: latency-svc-hwlzd [3.15615909s] May 6 23:58:07.362: INFO: Created: latency-svc-w4vl9 May 6 23:58:07.382: INFO: Got endpoints: latency-svc-w4vl9 [3.127306361s] May 6 23:58:07.474: INFO: Created: latency-svc-26z7p May 6 23:58:07.478: INFO: Got endpoints: latency-svc-26z7p [2.874542966s] May 6 23:58:08.011: INFO: Created: latency-svc-mv2f6 May 6 23:58:08.064: INFO: Got endpoints: latency-svc-mv2f6 [3.307991037s] May 6 23:58:08.348: INFO: Created: latency-svc-fd69g May 6 23:58:08.371: INFO: Got endpoints: latency-svc-fd69g [3.497544059s] May 6 23:58:08.521: INFO: Created: latency-svc-h2gg4 May 6 23:58:08.539: INFO: Got endpoints: latency-svc-h2gg4 [3.611039536s] May 6 23:58:08.598: INFO: Created: latency-svc-7t4xv May 6 23:58:08.612: INFO: Got endpoints: latency-svc-7t4xv [3.587180124s] May 6 23:58:08.682: INFO: Created: latency-svc-4hngm May 6 23:58:08.720: INFO: Got endpoints: latency-svc-4hngm [3.551994689s] May 6 23:58:08.845: INFO: Created: latency-svc-9tdz6 May 6 23:58:08.864: INFO: Got endpoints: 
latency-svc-9tdz6 [3.646750279s] May 6 23:58:08.934: INFO: Created: latency-svc-6tjnn May 6 23:58:09.168: INFO: Got endpoints: latency-svc-6tjnn [3.914361369s] May 6 23:58:09.187: INFO: Created: latency-svc-kk8zw May 6 23:58:09.200: INFO: Got endpoints: latency-svc-kk8zw [3.81432693s] May 6 23:58:09.263: INFO: Created: latency-svc-tnqjh May 6 23:58:09.323: INFO: Got endpoints: latency-svc-tnqjh [3.494406715s] May 6 23:58:09.347: INFO: Created: latency-svc-2mntq May 6 23:58:09.363: INFO: Got endpoints: latency-svc-2mntq [3.414277791s] May 6 23:58:09.516: INFO: Created: latency-svc-kv4pc May 6 23:58:09.522: INFO: Got endpoints: latency-svc-kv4pc [3.158149311s] May 6 23:58:09.601: INFO: Created: latency-svc-d9mz5 May 6 23:58:09.615: INFO: Got endpoints: latency-svc-d9mz5 [2.999550068s] May 6 23:58:09.688: INFO: Created: latency-svc-94k4n May 6 23:58:09.699: INFO: Got endpoints: latency-svc-94k4n [2.515017871s] May 6 23:58:09.725: INFO: Created: latency-svc-6bwf2 May 6 23:58:09.754: INFO: Got endpoints: latency-svc-6bwf2 [2.371561443s] May 6 23:58:09.786: INFO: Created: latency-svc-lvvqj May 6 23:58:09.832: INFO: Got endpoints: latency-svc-lvvqj [2.354283325s] May 6 23:58:09.877: INFO: Created: latency-svc-l46g9 May 6 23:58:09.917: INFO: Got endpoints: latency-svc-l46g9 [1.853015568s] May 6 23:58:09.995: INFO: Created: latency-svc-xgbpb May 6 23:58:09.998: INFO: Got endpoints: latency-svc-xgbpb [1.626341342s] May 6 23:58:10.037: INFO: Created: latency-svc-ttjzc May 6 23:58:10.060: INFO: Got endpoints: latency-svc-ttjzc [1.521521105s] May 6 23:58:10.091: INFO: Created: latency-svc-96bqt May 6 23:58:10.168: INFO: Got endpoints: latency-svc-96bqt [1.556120629s] May 6 23:58:10.171: INFO: Created: latency-svc-xlcm6 May 6 23:58:10.186: INFO: Got endpoints: latency-svc-xlcm6 [1.465870847s] May 6 23:58:10.267: INFO: Created: latency-svc-scjb6 May 6 23:58:10.318: INFO: Got endpoints: latency-svc-scjb6 [1.454537094s] May 6 23:58:10.339: INFO: Created: latency-svc-ztcxs May 6 23:58:10.354: INFO: Got endpoints: latency-svc-ztcxs [1.185607842s] May 6 23:58:10.399: INFO: Created: latency-svc-2t6f8 May 6 23:58:10.449: INFO: Got endpoints: latency-svc-2t6f8 [131.085759ms] May 6 23:58:10.469: INFO: Created: latency-svc-9p4mz May 6 23:58:10.492: INFO: Got endpoints: latency-svc-9p4mz [1.292861422s] May 6 23:58:10.538: INFO: Created: latency-svc-phld7 May 6 23:58:10.604: INFO: Got endpoints: latency-svc-phld7 [1.280963585s] May 6 23:58:10.625: INFO: Created: latency-svc-h287s May 6 23:58:10.647: INFO: Got endpoints: latency-svc-h287s [1.283847124s] May 6 23:58:10.796: INFO: Created: latency-svc-h45hh May 6 23:58:10.800: INFO: Got endpoints: latency-svc-h45hh [1.278705495s] May 6 23:58:10.892: INFO: Created: latency-svc-kfxmw May 6 23:58:10.895: INFO: Got endpoints: latency-svc-kfxmw [1.279828615s] May 6 23:58:10.980: INFO: Created: latency-svc-xp6xb May 6 23:58:11.027: INFO: Got endpoints: latency-svc-xp6xb [1.328158936s] May 6 23:58:11.563: INFO: Created: latency-svc-rxnwp May 6 23:58:11.653: INFO: Got endpoints: latency-svc-rxnwp [1.898905358s] May 6 23:58:11.697: INFO: Created: latency-svc-n48hf May 6 23:58:11.746: INFO: Got endpoints: latency-svc-n48hf [1.914144102s] May 6 23:58:11.952: INFO: Created: latency-svc-jqqxr May 6 23:58:12.307: INFO: Got endpoints: latency-svc-jqqxr [2.389729395s] May 6 23:58:12.308: INFO: Created: latency-svc-4zh9f May 6 23:58:12.612: INFO: Got endpoints: latency-svc-4zh9f [2.613890605s] May 6 23:58:12.666: INFO: Created: latency-svc-hd2fc May 6 23:58:12.694: INFO: Got endpoints: 
latency-svc-hd2fc [2.633407559s] May 6 23:58:12.767: INFO: Created: latency-svc-qmwd8 May 6 23:58:12.778: INFO: Got endpoints: latency-svc-qmwd8 [2.609827012s] May 6 23:58:12.826: INFO: Created: latency-svc-hg6fd May 6 23:58:12.833: INFO: Got endpoints: latency-svc-hg6fd [2.647304892s] May 6 23:58:12.936: INFO: Created: latency-svc-kp69x May 6 23:58:12.952: INFO: Got endpoints: latency-svc-kp69x [2.598598932s] May 6 23:58:12.984: INFO: Created: latency-svc-5n4pb May 6 23:58:13.108: INFO: Got endpoints: latency-svc-5n4pb [2.658209216s] May 6 23:58:13.271: INFO: Created: latency-svc-2vc5k May 6 23:58:13.294: INFO: Got endpoints: latency-svc-2vc5k [2.801494064s] May 6 23:58:13.351: INFO: Created: latency-svc-qf82s May 6 23:58:13.370: INFO: Got endpoints: latency-svc-qf82s [2.76514486s] May 6 23:58:13.437: INFO: Created: latency-svc-rvvcj May 6 23:58:13.466: INFO: Got endpoints: latency-svc-rvvcj [2.819765462s] May 6 23:58:13.501: INFO: Created: latency-svc-9pp94 May 6 23:58:13.517: INFO: Got endpoints: latency-svc-9pp94 [2.71666212s] May 6 23:58:13.575: INFO: Created: latency-svc-rcsz2 May 6 23:58:13.595: INFO: Got endpoints: latency-svc-rcsz2 [2.700345055s] May 6 23:58:13.641: INFO: Created: latency-svc-dnwll May 6 23:58:13.736: INFO: Got endpoints: latency-svc-dnwll [2.709123336s] May 6 23:58:13.753: INFO: Created: latency-svc-c4mnd May 6 23:58:13.771: INFO: Got endpoints: latency-svc-c4mnd [2.118078797s] May 6 23:58:13.893: INFO: Created: latency-svc-9sk56 May 6 23:58:13.908: INFO: Got endpoints: latency-svc-9sk56 [2.161519032s] May 6 23:58:14.036: INFO: Created: latency-svc-v4lcc May 6 23:58:14.039: INFO: Got endpoints: latency-svc-v4lcc [1.73193741s] May 6 23:58:14.087: INFO: Created: latency-svc-nm9lk May 6 23:58:14.126: INFO: Got endpoints: latency-svc-nm9lk [1.513871535s] May 6 23:58:14.222: INFO: Created: latency-svc-vkk45 May 6 23:58:14.225: INFO: Got endpoints: latency-svc-vkk45 [1.531398667s] May 6 23:58:14.293: INFO: Created: latency-svc-hf8k7 May 6 23:58:14.317: INFO: Got endpoints: latency-svc-hf8k7 [1.53929371s] May 6 23:58:14.369: INFO: Created: latency-svc-hz2pn May 6 23:58:14.412: INFO: Got endpoints: latency-svc-hz2pn [1.579355645s] May 6 23:58:14.491: INFO: Created: latency-svc-z5fdm May 6 23:58:14.494: INFO: Got endpoints: latency-svc-z5fdm [1.54149572s] May 6 23:58:14.562: INFO: Created: latency-svc-wsm4x May 6 23:58:14.575: INFO: Got endpoints: latency-svc-wsm4x [1.467191264s] May 6 23:58:14.671: INFO: Created: latency-svc-9ddn2 May 6 23:58:14.724: INFO: Created: latency-svc-tcqcm May 6 23:58:14.724: INFO: Got endpoints: latency-svc-9ddn2 [1.429971708s] May 6 23:58:14.767: INFO: Got endpoints: latency-svc-tcqcm [1.397502596s] May 6 23:58:14.856: INFO: Created: latency-svc-bgd8d May 6 23:58:14.876: INFO: Got endpoints: latency-svc-bgd8d [1.40991829s] May 6 23:58:14.946: INFO: Created: latency-svc-r9hxx May 6 23:58:14.982: INFO: Got endpoints: latency-svc-r9hxx [1.465223589s] May 6 23:58:15.001: INFO: Created: latency-svc-w5d45 May 6 23:58:15.015: INFO: Got endpoints: latency-svc-w5d45 [1.419601765s] May 6 23:58:15.140: INFO: Created: latency-svc-58t44 May 6 23:58:15.144: INFO: Got endpoints: latency-svc-58t44 [1.407869377s] May 6 23:58:15.204: INFO: Created: latency-svc-m22nc May 6 23:58:15.206: INFO: Got endpoints: latency-svc-m22nc [1.435219625s] May 6 23:58:15.309: INFO: Created: latency-svc-t7btj May 6 23:58:15.311: INFO: Got endpoints: latency-svc-t7btj [1.402533681s] May 6 23:58:15.353: INFO: Created: latency-svc-xz6bj May 6 23:58:15.375: INFO: Got endpoints: 
latency-svc-xz6bj [1.335781876s] May 6 23:58:15.498: INFO: Created: latency-svc-tgd9j May 6 23:58:15.522: INFO: Got endpoints: latency-svc-tgd9j [1.39636277s] May 6 23:58:15.557: INFO: Created: latency-svc-nhxfk May 6 23:58:15.579: INFO: Got endpoints: latency-svc-nhxfk [1.353496514s] May 6 23:58:15.684: INFO: Created: latency-svc-wzfzm May 6 23:58:15.707: INFO: Created: latency-svc-762jk May 6 23:58:15.707: INFO: Got endpoints: latency-svc-wzfzm [1.389903282s] May 6 23:58:15.718: INFO: Got endpoints: latency-svc-762jk [1.30524628s] May 6 23:58:15.748: INFO: Created: latency-svc-lkkfw May 6 23:58:15.754: INFO: Got endpoints: latency-svc-lkkfw [1.260040898s] May 6 23:58:15.817: INFO: Created: latency-svc-r2f9d May 6 23:58:15.842: INFO: Got endpoints: latency-svc-r2f9d [1.26716649s] May 6 23:58:15.887: INFO: Created: latency-svc-c9gcp May 6 23:58:15.940: INFO: Got endpoints: latency-svc-c9gcp [1.215701399s] May 6 23:58:16.014: INFO: Created: latency-svc-gfn9b May 6 23:58:16.037: INFO: Got endpoints: latency-svc-gfn9b [1.269543791s] May 6 23:58:16.102: INFO: Created: latency-svc-bkmn5 May 6 23:58:16.121: INFO: Got endpoints: latency-svc-bkmn5 [1.244482028s] May 6 23:58:16.181: INFO: Created: latency-svc-w2xcp May 6 23:58:16.311: INFO: Got endpoints: latency-svc-w2xcp [1.328802372s] May 6 23:58:16.314: INFO: Created: latency-svc-sfq9n May 6 23:58:16.319: INFO: Got endpoints: latency-svc-sfq9n [1.304052751s] May 6 23:58:16.337: INFO: Created: latency-svc-bvhtw May 6 23:58:16.380: INFO: Got endpoints: latency-svc-bvhtw [1.235361215s] May 6 23:58:16.467: INFO: Created: latency-svc-fzbmh May 6 23:58:16.471: INFO: Got endpoints: latency-svc-fzbmh [1.264286542s] May 6 23:58:16.523: INFO: Created: latency-svc-gzstr May 6 23:58:16.536: INFO: Got endpoints: latency-svc-gzstr [1.225432524s] May 6 23:58:16.623: INFO: Created: latency-svc-6j55b May 6 23:58:16.651: INFO: Got endpoints: latency-svc-6j55b [1.275928618s] May 6 23:58:16.653: INFO: Created: latency-svc-6sknm May 6 23:58:16.697: INFO: Got endpoints: latency-svc-6sknm [1.174933137s] May 6 23:58:16.767: INFO: Created: latency-svc-4shjq May 6 23:58:16.770: INFO: Got endpoints: latency-svc-4shjq [1.190574536s] May 6 23:58:16.928: INFO: Created: latency-svc-4r6c5 May 6 23:58:16.945: INFO: Got endpoints: latency-svc-4r6c5 [1.237772095s] May 6 23:58:16.976: INFO: Created: latency-svc-hjf28 May 6 23:58:16.987: INFO: Got endpoints: latency-svc-hjf28 [1.268976614s] May 6 23:58:17.004: INFO: Created: latency-svc-l9c9p May 6 23:58:17.017: INFO: Got endpoints: latency-svc-l9c9p [1.263549514s] May 6 23:58:17.090: INFO: Created: latency-svc-bx8mc May 6 23:58:17.093: INFO: Got endpoints: latency-svc-bx8mc [1.250868645s] May 6 23:58:17.114: INFO: Created: latency-svc-vqrgt May 6 23:58:17.131: INFO: Got endpoints: latency-svc-vqrgt [1.191448004s] May 6 23:58:17.160: INFO: Created: latency-svc-qbjz7 May 6 23:58:17.174: INFO: Got endpoints: latency-svc-qbjz7 [1.137030559s] May 6 23:58:17.270: INFO: Created: latency-svc-nrd8n May 6 23:58:17.272: INFO: Got endpoints: latency-svc-nrd8n [1.150904921s] May 6 23:58:17.324: INFO: Created: latency-svc-fxxlx May 6 23:58:17.337: INFO: Got endpoints: latency-svc-fxxlx [1.025584364s] May 6 23:58:17.476: INFO: Created: latency-svc-zdftc May 6 23:58:17.479: INFO: Got endpoints: latency-svc-zdftc [1.160305639s] May 6 23:58:17.551: INFO: Created: latency-svc-m92kz May 6 23:58:17.571: INFO: Got endpoints: latency-svc-m92kz [1.1910378s] May 6 23:58:17.646: INFO: Created: latency-svc-d8tc6 May 6 23:58:17.665: INFO: Got endpoints: 
latency-svc-d8tc6 [1.193906434s] May 6 23:58:17.707: INFO: Created: latency-svc-4chnr May 6 23:58:17.727: INFO: Got endpoints: latency-svc-4chnr [1.191075637s] May 6 23:58:17.797: INFO: Created: latency-svc-gk2dp May 6 23:58:17.805: INFO: Got endpoints: latency-svc-gk2dp [1.15436751s] May 6 23:58:17.843: INFO: Created: latency-svc-d2fww May 6 23:58:17.882: INFO: Got endpoints: latency-svc-d2fww [1.185119644s] May 6 23:58:17.976: INFO: Created: latency-svc-q4t4t May 6 23:58:18.023: INFO: Got endpoints: latency-svc-q4t4t [1.252866631s] May 6 23:58:18.065: INFO: Created: latency-svc-5cq74 May 6 23:58:18.162: INFO: Got endpoints: latency-svc-5cq74 [1.217149021s] May 6 23:58:18.164: INFO: Created: latency-svc-kq2nv May 6 23:58:18.174: INFO: Got endpoints: latency-svc-kq2nv [1.187420895s] May 6 23:58:18.223: INFO: Created: latency-svc-jwdtz May 6 23:58:18.324: INFO: Got endpoints: latency-svc-jwdtz [1.3064532s] May 6 23:58:18.334: INFO: Created: latency-svc-zrkx2 May 6 23:58:18.341: INFO: Got endpoints: latency-svc-zrkx2 [1.24742554s] May 6 23:58:18.384: INFO: Created: latency-svc-vl4h6 May 6 23:58:18.407: INFO: Got endpoints: latency-svc-vl4h6 [1.275339573s] May 6 23:58:18.473: INFO: Created: latency-svc-t5cxl May 6 23:58:18.477: INFO: Got endpoints: latency-svc-t5cxl [1.303327147s] May 6 23:58:18.546: INFO: Created: latency-svc-cp6z6 May 6 23:58:18.570: INFO: Got endpoints: latency-svc-cp6z6 [1.297696051s] May 6 23:58:18.606: INFO: Created: latency-svc-k5rgl May 6 23:58:18.611: INFO: Got endpoints: latency-svc-k5rgl [1.274195364s] May 6 23:58:18.636: INFO: Created: latency-svc-rcdwl May 6 23:58:18.642: INFO: Got endpoints: latency-svc-rcdwl [1.162102184s] May 6 23:58:18.685: INFO: Created: latency-svc-glhg8 May 6 23:58:18.737: INFO: Got endpoints: latency-svc-glhg8 [1.165805706s] May 6 23:58:18.746: INFO: Created: latency-svc-v2jh6 May 6 23:58:18.763: INFO: Got endpoints: latency-svc-v2jh6 [1.098087679s] May 6 23:58:18.789: INFO: Created: latency-svc-b9j8z May 6 23:58:18.792: INFO: Got endpoints: latency-svc-b9j8z [1.065043207s] May 6 23:58:18.835: INFO: Created: latency-svc-nqtbm May 6 23:58:18.946: INFO: Got endpoints: latency-svc-nqtbm [1.14050937s] May 6 23:58:18.947: INFO: Created: latency-svc-zcksg May 6 23:58:18.955: INFO: Got endpoints: latency-svc-zcksg [1.072936043s] May 6 23:58:18.991: INFO: Created: latency-svc-2mhtq May 6 23:58:19.009: INFO: Got endpoints: latency-svc-2mhtq [986.840593ms] May 6 23:58:19.085: INFO: Created: latency-svc-fbvdf May 6 23:58:19.086: INFO: Got endpoints: latency-svc-fbvdf [924.353457ms] May 6 23:58:19.134: INFO: Created: latency-svc-2qwkz May 6 23:58:19.154: INFO: Got endpoints: latency-svc-2qwkz [980.002428ms] May 6 23:58:19.184: INFO: Created: latency-svc-rdv57 May 6 23:58:19.257: INFO: Got endpoints: latency-svc-rdv57 [933.570144ms] May 6 23:58:19.307: INFO: Created: latency-svc-lmplb May 6 23:58:19.432: INFO: Got endpoints: latency-svc-lmplb [1.091271288s] May 6 23:58:19.436: INFO: Created: latency-svc-fdzzp May 6 23:58:19.448: INFO: Got endpoints: latency-svc-fdzzp [1.041227058s] May 6 23:58:19.487: INFO: Created: latency-svc-7db96 May 6 23:58:19.524: INFO: Got endpoints: latency-svc-7db96 [1.046963144s] May 6 23:58:19.598: INFO: Created: latency-svc-ktrlm May 6 23:58:19.623: INFO: Got endpoints: latency-svc-ktrlm [1.053182872s] May 6 23:58:19.650: INFO: Created: latency-svc-w66tl May 6 23:58:19.659: INFO: Got endpoints: latency-svc-w66tl [1.048160623s] May 6 23:58:19.737: INFO: Created: latency-svc-b8slt May 6 23:58:19.739: INFO: Got endpoints: 
latency-svc-b8slt [1.097564275s] May 6 23:58:19.794: INFO: Created: latency-svc-ghrnt May 6 23:58:19.821: INFO: Got endpoints: latency-svc-ghrnt [1.084831764s] May 6 23:58:19.886: INFO: Created: latency-svc-88qll May 6 23:58:19.889: INFO: Got endpoints: latency-svc-88qll [1.126641898s] May 6 23:58:19.969: INFO: Created: latency-svc-2g4dz May 6 23:58:20.024: INFO: Got endpoints: latency-svc-2g4dz [1.231639649s] May 6 23:58:20.069: INFO: Created: latency-svc-mnbw4 May 6 23:58:20.174: INFO: Got endpoints: latency-svc-mnbw4 [1.227566384s] May 6 23:58:20.175: INFO: Created: latency-svc-9cxgf May 6 23:58:20.182: INFO: Got endpoints: latency-svc-9cxgf [1.226991731s] May 6 23:58:20.331: INFO: Created: latency-svc-qv2c6 May 6 23:58:20.333: INFO: Got endpoints: latency-svc-qv2c6 [1.323120839s] May 6 23:58:20.376: INFO: Created: latency-svc-94kpn May 6 23:58:20.399: INFO: Got endpoints: latency-svc-94kpn [1.3122945s] May 6 23:58:20.429: INFO: Created: latency-svc-55hrj May 6 23:58:20.521: INFO: Got endpoints: latency-svc-55hrj [1.366848527s] May 6 23:58:20.539: INFO: Created: latency-svc-xhv5t May 6 23:58:20.555: INFO: Got endpoints: latency-svc-xhv5t [1.297716235s] May 6 23:58:20.589: INFO: Created: latency-svc-2nqjs May 6 23:58:20.603: INFO: Got endpoints: latency-svc-2nqjs [1.171053738s] May 6 23:58:20.720: INFO: Created: latency-svc-rpdpk May 6 23:58:20.772: INFO: Got endpoints: latency-svc-rpdpk [1.324173205s] May 6 23:58:20.911: INFO: Created: latency-svc-k4rqb May 6 23:58:20.913: INFO: Got endpoints: latency-svc-k4rqb [1.389027159s] May 6 23:58:21.080: INFO: Created: latency-svc-gdw4r May 6 23:58:21.083: INFO: Got endpoints: latency-svc-gdw4r [1.459811087s] May 6 23:58:21.144: INFO: Created: latency-svc-m7dnh May 6 23:58:21.168: INFO: Got endpoints: latency-svc-m7dnh [1.509114239s] May 6 23:58:21.225: INFO: Created: latency-svc-z8lhc May 6 23:58:21.228: INFO: Got endpoints: latency-svc-z8lhc [1.488657488s] May 6 23:58:21.359: INFO: Created: latency-svc-lwq5r May 6 23:58:21.362: INFO: Got endpoints: latency-svc-lwq5r [1.540852384s] May 6 23:58:21.407: INFO: Created: latency-svc-d6ttv May 6 23:58:21.426: INFO: Got endpoints: latency-svc-d6ttv [1.536670823s] May 6 23:58:21.510: INFO: Created: latency-svc-gcxdd May 6 23:58:21.516: INFO: Got endpoints: latency-svc-gcxdd [1.491812717s] May 6 23:58:21.516: INFO: Latencies: [131.085759ms 145.461195ms 278.679289ms 412.660896ms 462.773957ms 574.624179ms 639.767234ms 803.716309ms 834.362362ms 924.353457ms 925.138209ms 933.570144ms 980.002428ms 986.840593ms 998.199389ms 1.025584364s 1.041227058s 1.046963144s 1.048160623s 1.053182872s 1.065043207s 1.065192923s 1.072936043s 1.084831764s 1.091271288s 1.097564275s 1.098087679s 1.126641898s 1.137030559s 1.138182813s 1.14050937s 1.150904921s 1.15436751s 1.160305639s 1.162102184s 1.165805706s 1.171053738s 1.174933137s 1.185119644s 1.185607842s 1.187420895s 1.190574536s 1.1910378s 1.191075637s 1.191448004s 1.193906434s 1.215701399s 1.217149021s 1.225432524s 1.226991731s 1.227566384s 1.231639649s 1.235361215s 1.237772095s 1.240779771s 1.244482028s 1.24742554s 1.250868645s 1.252866631s 1.260040898s 1.263549514s 1.264286542s 1.26716649s 1.268976614s 1.269543791s 1.274195364s 1.275339573s 1.275928618s 1.278705495s 1.279828615s 1.280963585s 1.283847124s 1.28830488s 1.292861422s 1.297696051s 1.297716235s 1.298385122s 1.303327147s 1.304052751s 1.30524628s 1.3064532s 1.3122945s 1.323120839s 1.324173205s 1.328158936s 1.328802372s 1.335781876s 1.338863477s 1.353496514s 1.365782556s 1.366848527s 1.373958544s 1.388449449s 
1.389027159s 1.389379017s 1.389903282s 1.39636277s 1.397502596s 1.402533681s 1.407869377s 1.408376372s 1.40991829s 1.419601765s 1.421637402s 1.429971708s 1.435219625s 1.44282461s 1.454537094s 1.459811087s 1.465223589s 1.465870847s 1.467191264s 1.488657488s 1.491812717s 1.509114239s 1.513871535s 1.521521105s 1.531398667s 1.536670823s 1.53929371s 1.540852384s 1.54149572s 1.556120629s 1.579355645s 1.626341342s 1.685284568s 1.716940754s 1.73193741s 1.795163136s 1.853015568s 1.898905358s 1.914144102s 1.915187342s 1.980772298s 2.034304943s 2.091197135s 2.118078797s 2.161519032s 2.292157981s 2.354283325s 2.371561443s 2.389729395s 2.40784265s 2.418078351s 2.421612281s 2.430675891s 2.457214502s 2.498015406s 2.504258425s 2.515017871s 2.522777422s 2.532082273s 2.534489692s 2.540190356s 2.596349899s 2.598598932s 2.609827012s 2.611182502s 2.613890605s 2.623140755s 2.633407559s 2.647304892s 2.658209216s 2.700345055s 2.709123336s 2.71666212s 2.76514486s 2.801494064s 2.819765462s 2.872911464s 2.874542966s 2.986950843s 2.999550068s 3.097593449s 3.127306361s 3.146163961s 3.15615909s 3.158149311s 3.255557186s 3.307991037s 3.358764554s 3.373435338s 3.391011541s 3.414277791s 3.480063003s 3.494406715s 3.497544059s 3.551994689s 3.587180124s 3.611039536s 3.646750279s 3.756792883s 3.81432693s 3.827948942s 3.856066915s 3.856156124s 3.880551849s 3.914361369s 3.950340442s 3.951252732s] May 6 23:58:21.516: INFO: 50 %ile: 1.408376372s May 6 23:58:21.516: INFO: 90 %ile: 3.358764554s May 6 23:58:21.516: INFO: 99 %ile: 3.950340442s May 6 23:58:21.516: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:58:21.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-3616" for this suite. 
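The 50/90/99 %ile lines above are order statistics over the 200 endpoint-propagation samples in the Latencies array. A self-contained sketch of that computation follows; the ceiling-style rank used here is an assumed rounding rule (not necessarily the framework's exact one), and the sample values are a few entries copied from the list above.

    package main

    import (
        "fmt"
        "sort"
        "time"
    )

    // percentileFromSorted returns the sample sitting at the p-th percentile
    // position of an ascending slice, using ceil(p*N/100) as the 1-based rank.
    func percentileFromSorted(sorted []time.Duration, p int) time.Duration {
        if len(sorted) == 0 {
            return 0
        }
        rank := (p*len(sorted) + 99) / 100 // integer ceil(p*N/100)
        if rank < 1 {
            rank = 1
        }
        return sorted[rank-1]
    }

    func main() {
        samples := []time.Duration{ // a handful of the latencies logged above
            131085759 * time.Nanosecond,
            1408376372 * time.Nanosecond,
            3358764554 * time.Nanosecond,
            3950340442 * time.Nanosecond,
        }
        sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
        for _, p := range []int{50, 90, 99} {
            fmt.Printf("%d %%ile: %v\n", p, percentileFromSorted(samples, p))
        }
    }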
• [SLOW TEST:28.793 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":214,"skipped":3481,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:58:21.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 6 23:58:27.142: INFO: Successfully updated pod "labelsupdate165cd0a9-80f1-4ff3-a53d-ae29d482c435" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:58:29.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8517" for this suite. 
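The labelsupdate pod above mounts its own metadata.labels through a projected downward API volume; when the test patches the pod's labels, the kubelet rewrites the mounted file, which is what the "Successfully updated pod" step verifies. A minimal sketch of such a pod spec with the core/v1 Go types follows; the mount path, image, and names are illustrative assumptions.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name:   "labelsupdate-example",
                Labels: map[string]string{"key": "value1"}, // patched later; kubelet refreshes the file
            },
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:    "client-container",
                    Image:   "busybox:1.29",
                    Command: []string{"sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "podinfo",
                        MountPath: "/etc/podinfo",
                    }},
                }},
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        Projected: &corev1.ProjectedVolumeSource{
                            Sources: []corev1.VolumeProjection{{
                                DownwardAPI: &corev1.DownwardAPIProjection{
                                    Items: []corev1.DownwardAPIVolumeFile{{
                                        Path:     "labels",
                                        FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
                                    }},
                                },
                            }},
                        },
                    },
                }},
            },
        }
        fmt.Printf("%s projects %s into %s/labels\n",
            pod.Name,
            pod.Spec.Volumes[0].VolumeSource.Projected.Sources[0].DownwardAPI.Items[0].FieldRef.FieldPath,
            pod.Spec.Containers[0].VolumeMounts[0].MountPath)
    }

Unlike environment variables, a downward API volume is refreshed after pod metadata changes, which is why the running container can observe the new label values.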
• [SLOW TEST:8.109 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":215,"skipped":3499,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:58:29.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 6 23:58:31.551: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 6 23:58:33.887: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406311, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406311, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406311, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406311, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 23:58:35.948: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406311, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406311, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406311, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406311, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 23:58:39.456: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:58:40.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5268" for this suite. STEP: Destroying namespace "webhook-5268-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:11.788 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":216,"skipped":3500,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:58:41.493: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:58:53.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2190" for this suite. • [SLOW TEST:12.297 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":217,"skipped":3515,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:58:53.791: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 6 23:58:54.603: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 6 23:58:56.924: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406334, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406334, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406334, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406334, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 6 23:58:58.947: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406334, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406334, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406334, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406334, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 23:59:02.040: INFO: Waiting for amount 
of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook May 6 23:59:08.408: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-2414 to-be-attached-pod -i -c=container1' May 6 23:59:08.807: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:59:09.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2414" for this suite. STEP: Destroying namespace "webhook-2414-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:15.918 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":218,"skipped":3551,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:59:09.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 6 23:59:11.267: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 6 23:59:13.944: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406351, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406351, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406351, loc:(*time.Location)(0x78ee0c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406351, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 6 23:59:17.397: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a webhook that the server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap that should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:59:17.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2048" for this suite. STEP: Destroying namespace "webhook-2048-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.566 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":219,"skipped":3558,"failed":0} SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:59:18.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-f12c0bee-63f9-4aae-b706-677bb5fba52d STEP: Creating a pod to test consuming secrets May 6 23:59:19.145: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-faf7bf0a-8fe7-4b85-b9e2-881abec05107" in namespace "projected-2825" to be "success or failure" May 6 23:59:19.348: INFO: Pod "pod-projected-secrets-faf7bf0a-8fe7-4b85-b9e2-881abec05107": Phase="Pending", Reason="", readiness=false. Elapsed: 203.050432ms May 6 23:59:21.379: INFO: Pod "pod-projected-secrets-faf7bf0a-8fe7-4b85-b9e2-881abec05107": Phase="Pending", Reason="", readiness=false.
Elapsed: 2.233676209s May 6 23:59:23.528: INFO: Pod "pod-projected-secrets-faf7bf0a-8fe7-4b85-b9e2-881abec05107": Phase="Pending", Reason="", readiness=false. Elapsed: 4.383311377s May 6 23:59:25.532: INFO: Pod "pod-projected-secrets-faf7bf0a-8fe7-4b85-b9e2-881abec05107": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.386743736s STEP: Saw pod success May 6 23:59:25.532: INFO: Pod "pod-projected-secrets-faf7bf0a-8fe7-4b85-b9e2-881abec05107" satisfied condition "success or failure" May 6 23:59:25.534: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-faf7bf0a-8fe7-4b85-b9e2-881abec05107 container projected-secret-volume-test: STEP: delete the pod May 6 23:59:25.599: INFO: Waiting for pod pod-projected-secrets-faf7bf0a-8fe7-4b85-b9e2-881abec05107 to disappear May 6 23:59:25.608: INFO: Pod pod-projected-secrets-faf7bf0a-8fe7-4b85-b9e2-881abec05107 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:59:25.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2825" for this suite. • [SLOW TEST:7.338 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":220,"skipped":3560,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:59:25.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs May 6 23:59:25.845: INFO: Waiting up to 5m0s for pod "pod-5419e2e3-8548-49de-98e5-fd1da3086684" in namespace "emptydir-2072" to be "success or failure" May 6 23:59:25.902: INFO: Pod "pod-5419e2e3-8548-49de-98e5-fd1da3086684": Phase="Pending", Reason="", readiness=false. Elapsed: 56.327874ms May 6 23:59:28.060: INFO: Pod "pod-5419e2e3-8548-49de-98e5-fd1da3086684": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214132371s May 6 23:59:30.063: INFO: Pod "pod-5419e2e3-8548-49de-98e5-fd1da3086684": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.2173771s STEP: Saw pod success May 6 23:59:30.063: INFO: Pod "pod-5419e2e3-8548-49de-98e5-fd1da3086684" satisfied condition "success or failure" May 6 23:59:30.066: INFO: Trying to get logs from node jerma-worker2 pod pod-5419e2e3-8548-49de-98e5-fd1da3086684 container test-container: STEP: delete the pod May 6 23:59:30.339: INFO: Waiting for pod pod-5419e2e3-8548-49de-98e5-fd1da3086684 to disappear May 6 23:59:30.588: INFO: Pod pod-5419e2e3-8548-49de-98e5-fd1da3086684 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:59:30.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2072" for this suite. • [SLOW TEST:5.226 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":221,"skipped":3564,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:59:30.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium May 6 23:59:31.269: INFO: Waiting up to 5m0s for pod "pod-1eb58a8d-ff17-4cfc-a4a9-33266732d14d" in namespace "emptydir-6929" to be "success or failure" May 6 23:59:31.372: INFO: Pod "pod-1eb58a8d-ff17-4cfc-a4a9-33266732d14d": Phase="Pending", Reason="", readiness=false. Elapsed: 102.975739ms May 6 23:59:33.466: INFO: Pod "pod-1eb58a8d-ff17-4cfc-a4a9-33266732d14d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.196146798s May 6 23:59:35.558: INFO: Pod "pod-1eb58a8d-ff17-4cfc-a4a9-33266732d14d": Phase="Running", Reason="", readiness=true. Elapsed: 4.288953165s May 6 23:59:37.561: INFO: Pod "pod-1eb58a8d-ff17-4cfc-a4a9-33266732d14d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.291717345s STEP: Saw pod success May 6 23:59:37.561: INFO: Pod "pod-1eb58a8d-ff17-4cfc-a4a9-33266732d14d" satisfied condition "success or failure" May 6 23:59:37.563: INFO: Trying to get logs from node jerma-worker pod pod-1eb58a8d-ff17-4cfc-a4a9-33266732d14d container test-container: STEP: delete the pod May 6 23:59:37.598: INFO: Waiting for pod pod-1eb58a8d-ff17-4cfc-a4a9-33266732d14d to disappear May 6 23:59:37.626: INFO: Pod pod-1eb58a8d-ff17-4cfc-a4a9-33266732d14d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:59:37.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6929" for this suite. • [SLOW TEST:6.793 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":222,"skipped":3583,"failed":0} S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:59:37.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-5d69bce5-f678-47a9-9dcb-f49592e794db STEP: Creating a pod to test consume configMaps May 6 23:59:37.848: INFO: Waiting up to 5m0s for pod "pod-configmaps-e21a841f-8bbc-4c6b-9e4f-e0edf061441d" in namespace "configmap-2745" to be "success or failure" May 6 23:59:37.913: INFO: Pod "pod-configmaps-e21a841f-8bbc-4c6b-9e4f-e0edf061441d": Phase="Pending", Reason="", readiness=false. Elapsed: 65.586088ms May 6 23:59:39.918: INFO: Pod "pod-configmaps-e21a841f-8bbc-4c6b-9e4f-e0edf061441d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070572529s May 6 23:59:42.098: INFO: Pod "pod-configmaps-e21a841f-8bbc-4c6b-9e4f-e0edf061441d": Phase="Running", Reason="", readiness=true. Elapsed: 4.250062125s May 6 23:59:44.101: INFO: Pod "pod-configmaps-e21a841f-8bbc-4c6b-9e4f-e0edf061441d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.253403489s STEP: Saw pod success May 6 23:59:44.101: INFO: Pod "pod-configmaps-e21a841f-8bbc-4c6b-9e4f-e0edf061441d" satisfied condition "success or failure" May 6 23:59:44.104: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-e21a841f-8bbc-4c6b-9e4f-e0edf061441d container configmap-volume-test: STEP: delete the pod May 6 23:59:44.663: INFO: Waiting for pod pod-configmaps-e21a841f-8bbc-4c6b-9e4f-e0edf061441d to disappear May 6 23:59:44.740: INFO: Pod pod-configmaps-e21a841f-8bbc-4c6b-9e4f-e0edf061441d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:59:44.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2745" for this suite. • [SLOW TEST:7.267 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":223,"skipped":3584,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:59:44.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating secret secrets-3661/secret-test-86993381-0981-4781-bd08-3453b8463782 STEP: Creating a pod to test consume secrets May 6 23:59:45.246: INFO: Waiting up to 5m0s for pod "pod-configmaps-4fa1b24f-8331-4aa2-98d3-c2d1de2aca02" in namespace "secrets-3661" to be "success or failure" May 6 23:59:45.267: INFO: Pod "pod-configmaps-4fa1b24f-8331-4aa2-98d3-c2d1de2aca02": Phase="Pending", Reason="", readiness=false. Elapsed: 21.098075ms May 6 23:59:47.271: INFO: Pod "pod-configmaps-4fa1b24f-8331-4aa2-98d3-c2d1de2aca02": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025203421s May 6 23:59:49.319: INFO: Pod "pod-configmaps-4fa1b24f-8331-4aa2-98d3-c2d1de2aca02": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.072467411s STEP: Saw pod success May 6 23:59:49.319: INFO: Pod "pod-configmaps-4fa1b24f-8331-4aa2-98d3-c2d1de2aca02" satisfied condition "success or failure" May 6 23:59:49.322: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-4fa1b24f-8331-4aa2-98d3-c2d1de2aca02 container env-test: STEP: delete the pod May 6 23:59:49.383: INFO: Waiting for pod pod-configmaps-4fa1b24f-8331-4aa2-98d3-c2d1de2aca02 to disappear May 6 23:59:49.618: INFO: Pod pod-configmaps-4fa1b24f-8331-4aa2-98d3-c2d1de2aca02 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:59:49.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3661" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":224,"skipped":3599,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:59:49.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 6 23:59:49.973: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3224e141-8eef-4df7-a2e0-e4b8434765bd" in namespace "downward-api-7488" to be "success or failure" May 6 23:59:50.199: INFO: Pod "downwardapi-volume-3224e141-8eef-4df7-a2e0-e4b8434765bd": Phase="Pending", Reason="", readiness=false. Elapsed: 226.687369ms May 6 23:59:52.202: INFO: Pod "downwardapi-volume-3224e141-8eef-4df7-a2e0-e4b8434765bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.229647324s May 6 23:59:54.229: INFO: Pod "downwardapi-volume-3224e141-8eef-4df7-a2e0-e4b8434765bd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.256164627s May 6 23:59:56.233: INFO: Pod "downwardapi-volume-3224e141-8eef-4df7-a2e0-e4b8434765bd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.259738336s STEP: Saw pod success May 6 23:59:56.233: INFO: Pod "downwardapi-volume-3224e141-8eef-4df7-a2e0-e4b8434765bd" satisfied condition "success or failure" May 6 23:59:56.235: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-3224e141-8eef-4df7-a2e0-e4b8434765bd container client-container: STEP: delete the pod May 6 23:59:56.268: INFO: Waiting for pod downwardapi-volume-3224e141-8eef-4df7-a2e0-e4b8434765bd to disappear May 6 23:59:56.288: INFO: Pod downwardapi-volume-3224e141-8eef-4df7-a2e0-e4b8434765bd no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 6 23:59:56.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7488" for this suite. • [SLOW TEST:6.491 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":225,"skipped":3612,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 6 23:59:56.297: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 6 23:59:57.223: INFO: Pod name wrapped-volume-race-3338cec3-8de6-4b49-800a-36e8e12aca43: Found 0 pods out of 5 May 7 00:00:02.228: INFO: Pod name wrapped-volume-race-3338cec3-8de6-4b49-800a-36e8e12aca43: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-3338cec3-8de6-4b49-800a-36e8e12aca43 in namespace emptydir-wrapper-5382, will wait for the garbage collector to delete the pods May 7 00:00:18.397: INFO: Deleting ReplicationController wrapped-volume-race-3338cec3-8de6-4b49-800a-36e8e12aca43 took: 8.73044ms May 7 00:00:19.098: INFO: Terminating ReplicationController wrapped-volume-race-3338cec3-8de6-4b49-800a-36e8e12aca43 pods took: 700.305332ms STEP: Creating RC which spawns configmap-volume pods May 7 00:00:29.640: INFO: Pod name wrapped-volume-race-f4d65dec-803e-41ef-9b7b-8824e25bced3: Found 0 pods out of 5 May 7 00:00:34.652: INFO: Pod name wrapped-volume-race-f4d65dec-803e-41ef-9b7b-8824e25bced3: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-f4d65dec-803e-41ef-9b7b-8824e25bced3 in namespace emptydir-wrapper-5382, will wait for the garbage collector to delete the pods May 7 
00:00:49.924: INFO: Deleting ReplicationController wrapped-volume-race-f4d65dec-803e-41ef-9b7b-8824e25bced3 took: 157.678415ms May 7 00:00:50.924: INFO: Terminating ReplicationController wrapped-volume-race-f4d65dec-803e-41ef-9b7b-8824e25bced3 pods took: 1.000286089s STEP: Creating RC which spawns configmap-volume pods May 7 00:01:09.816: INFO: Pod name wrapped-volume-race-ccc4514b-867a-46d8-9f60-190e1743c4ce: Found 0 pods out of 5 May 7 00:01:14.822: INFO: Pod name wrapped-volume-race-ccc4514b-867a-46d8-9f60-190e1743c4ce: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-ccc4514b-867a-46d8-9f60-190e1743c4ce in namespace emptydir-wrapper-5382, will wait for the garbage collector to delete the pods May 7 00:01:32.951: INFO: Deleting ReplicationController wrapped-volume-race-ccc4514b-867a-46d8-9f60-190e1743c4ce took: 58.536867ms May 7 00:01:33.351: INFO: Terminating ReplicationController wrapped-volume-race-ccc4514b-867a-46d8-9f60-190e1743c4ce pods took: 400.315489ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 7 00:01:50.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-5382" for this suite. • [SLOW TEST:114.595 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":226,"skipped":3624,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 7 00:01:50.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-j8wtt in namespace proxy-5309 I0507 00:01:51.003058 6 runners.go:189] Created replication controller with name: proxy-service-j8wtt, namespace: proxy-5309, replica count: 1 I0507 00:01:52.053541 6 runners.go:189] proxy-service-j8wtt Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0507 00:01:53.053726 6 runners.go:189] proxy-service-j8wtt Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0507 00:01:54.053941 6 runners.go:189] proxy-service-j8wtt Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0507 00:01:55.054172 6 runners.go:189] proxy-service-j8wtt Pods: 1 out of 1 
created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0507 00:01:56.054370 6 runners.go:189] proxy-service-j8wtt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0507 00:01:57.054598 6 runners.go:189] proxy-service-j8wtt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0507 00:01:58.054824 6 runners.go:189] proxy-service-j8wtt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0507 00:01:59.055065 6 runners.go:189] proxy-service-j8wtt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0507 00:02:00.055271 6 runners.go:189] proxy-service-j8wtt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0507 00:02:01.055542 6 runners.go:189] proxy-service-j8wtt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0507 00:02:02.055755 6 runners.go:189] proxy-service-j8wtt Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 7 00:02:02.075: INFO: setup took 11.137372785s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 7 00:02:02.093: INFO: (0) /api/v1/namespaces/proxy-5309/pods/http:proxy-service-j8wtt-f4srw:1080/proxy/: ... (200; 17.676552ms) May 7 00:02:02.093: INFO: (0) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw/proxy/: test (200; 18.10054ms) May 7 00:02:02.094: INFO: (0) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw:1080/proxy/: test<... 
(200; 18.608393ms) May 7 00:02:02.094: INFO: (0) /api/v1/namespaces/proxy-5309/services/http:proxy-service-j8wtt:portname1/proxy/: foo (200; 18.944755ms) May 7 00:02:02.095: INFO: (0) /api/v1/namespaces/proxy-5309/services/proxy-service-j8wtt:portname1/proxy/: foo (200; 19.464595ms) May 7 00:02:02.095: INFO: (0) /api/v1/namespaces/proxy-5309/pods/http:proxy-service-j8wtt-f4srw:160/proxy/: foo (200; 20.258881ms) May 7 00:02:02.097: INFO: (0) /api/v1/namespaces/proxy-5309/pods/http:proxy-service-j8wtt-f4srw:162/proxy/: bar (200; 21.538359ms) May 7 00:02:02.098: INFO: (0) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw:160/proxy/: foo (200; 22.530036ms) May 7 00:02:02.098: INFO: (0) /api/v1/namespaces/proxy-5309/services/http:proxy-service-j8wtt:portname2/proxy/: bar (200; 22.979269ms) May 7 00:02:02.098: INFO: (0) /api/v1/namespaces/proxy-5309/services/proxy-service-j8wtt:portname2/proxy/: bar (200; 23.251569ms) May 7 00:02:02.100: INFO: (0) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw:162/proxy/: bar (200; 24.551085ms) May 7 00:02:02.105: INFO: (0) /api/v1/namespaces/proxy-5309/pods/https:proxy-service-j8wtt-f4srw:460/proxy/: tls baz (200; 29.732817ms) May 7 00:02:02.105: INFO: (0) /api/v1/namespaces/proxy-5309/pods/https:proxy-service-j8wtt-f4srw:462/proxy/: tls qux (200; 29.601932ms) May 7 00:02:02.105: INFO: (0) /api/v1/namespaces/proxy-5309/services/https:proxy-service-j8wtt:tlsportname2/proxy/: tls qux (200; 29.563537ms) May 7 00:02:02.105: INFO: (0) /api/v1/namespaces/proxy-5309/services/https:proxy-service-j8wtt:tlsportname1/proxy/: tls baz (200; 29.783448ms) May 7 00:02:02.105: INFO: (0) /api/v1/namespaces/proxy-5309/pods/https:proxy-service-j8wtt-f4srw:443/proxy/: ... (200; 5.19845ms) May 7 00:02:02.111: INFO: (1) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw:162/proxy/: bar (200; 5.641105ms) May 7 00:02:02.111: INFO: (1) /api/v1/namespaces/proxy-5309/pods/http:proxy-service-j8wtt-f4srw:160/proxy/: foo (200; 5.649199ms) May 7 00:02:02.111: INFO: (1) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw/proxy/: test (200; 5.869482ms) May 7 00:02:02.111: INFO: (1) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw:1080/proxy/: test<... (200; 6.087571ms) May 7 00:02:02.111: INFO: (1) /api/v1/namespaces/proxy-5309/pods/http:proxy-service-j8wtt-f4srw:162/proxy/: bar (200; 5.976862ms) May 7 00:02:02.111: INFO: (1) /api/v1/namespaces/proxy-5309/pods/https:proxy-service-j8wtt-f4srw:443/proxy/: ... (200; 12.858777ms) May 7 00:02:02.129: INFO: (2) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw/proxy/: test (200; 12.891768ms) May 7 00:02:02.129: INFO: (2) /api/v1/namespaces/proxy-5309/pods/http:proxy-service-j8wtt-f4srw:162/proxy/: bar (200; 12.907191ms) May 7 00:02:02.129: INFO: (2) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw:160/proxy/: foo (200; 12.969783ms) May 7 00:02:02.129: INFO: (2) /api/v1/namespaces/proxy-5309/pods/http:proxy-service-j8wtt-f4srw:160/proxy/: foo (200; 13.031021ms) May 7 00:02:02.129: INFO: (2) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw:1080/proxy/: test<... 
(200; 13.571068ms) May 7 00:02:02.130: INFO: (2) /api/v1/namespaces/proxy-5309/services/http:proxy-service-j8wtt:portname1/proxy/: foo (200; 14.435805ms) May 7 00:02:02.130: INFO: (2) /api/v1/namespaces/proxy-5309/services/proxy-service-j8wtt:portname1/proxy/: foo (200; 14.437547ms) May 7 00:02:02.130: INFO: (2) /api/v1/namespaces/proxy-5309/services/http:proxy-service-j8wtt:portname2/proxy/: bar (200; 14.458958ms) May 7 00:02:02.130: INFO: (2) /api/v1/namespaces/proxy-5309/services/https:proxy-service-j8wtt:tlsportname1/proxy/: tls baz (200; 14.508384ms) May 7 00:02:02.130: INFO: (2) /api/v1/namespaces/proxy-5309/services/proxy-service-j8wtt:portname2/proxy/: bar (200; 14.562483ms) May 7 00:02:02.130: INFO: (2) /api/v1/namespaces/proxy-5309/services/https:proxy-service-j8wtt:tlsportname2/proxy/: tls qux (200; 14.639053ms) May 7 00:02:02.172: INFO: (3) /api/v1/namespaces/proxy-5309/pods/http:proxy-service-j8wtt-f4srw:162/proxy/: bar (200; 41.75435ms) May 7 00:02:02.172: INFO: (3) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw/proxy/: test (200; 41.753767ms) May 7 00:02:02.172: INFO: (3) /api/v1/namespaces/proxy-5309/pods/https:proxy-service-j8wtt-f4srw:460/proxy/: tls baz (200; 41.926167ms) May 7 00:02:02.172: INFO: (3) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw:162/proxy/: bar (200; 41.999977ms) May 7 00:02:02.173: INFO: (3) /api/v1/namespaces/proxy-5309/pods/https:proxy-service-j8wtt-f4srw:462/proxy/: tls qux (200; 42.18011ms) May 7 00:02:02.173: INFO: (3) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw:1080/proxy/: test<... (200; 42.395446ms) May 7 00:02:02.173: INFO: (3) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw:160/proxy/: foo (200; 42.287278ms) May 7 00:02:02.173: INFO: (3) /api/v1/namespaces/proxy-5309/pods/http:proxy-service-j8wtt-f4srw:160/proxy/: foo (200; 42.324026ms) May 7 00:02:02.173: INFO: (3) /api/v1/namespaces/proxy-5309/pods/https:proxy-service-j8wtt-f4srw:443/proxy/: ... (200; 44.436806ms) May 7 00:02:02.175: INFO: (3) /api/v1/namespaces/proxy-5309/services/https:proxy-service-j8wtt:tlsportname2/proxy/: tls qux (200; 44.491274ms) May 7 00:02:02.175: INFO: (3) /api/v1/namespaces/proxy-5309/services/proxy-service-j8wtt:portname1/proxy/: foo (200; 44.410398ms) May 7 00:02:02.175: INFO: (3) /api/v1/namespaces/proxy-5309/services/proxy-service-j8wtt:portname2/proxy/: bar (200; 44.41728ms) May 7 00:02:02.175: INFO: (3) /api/v1/namespaces/proxy-5309/services/https:proxy-service-j8wtt:tlsportname1/proxy/: tls baz (200; 44.381167ms) May 7 00:02:02.196: INFO: (4) /api/v1/namespaces/proxy-5309/pods/http:proxy-service-j8wtt-f4srw:160/proxy/: foo (200; 21.006715ms) May 7 00:02:02.197: INFO: (4) /api/v1/namespaces/proxy-5309/pods/https:proxy-service-j8wtt-f4srw:460/proxy/: tls baz (200; 21.757907ms) May 7 00:02:02.197: INFO: (4) /api/v1/namespaces/proxy-5309/services/https:proxy-service-j8wtt:tlsportname1/proxy/: tls baz (200; 22.070854ms) May 7 00:02:02.197: INFO: (4) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw/proxy/: test (200; 22.04029ms) May 7 00:02:02.197: INFO: (4) /api/v1/namespaces/proxy-5309/pods/https:proxy-service-j8wtt-f4srw:443/proxy/: test<... 
(200; 22.973921ms) May 7 00:02:02.198: INFO: (4) /api/v1/namespaces/proxy-5309/services/http:proxy-service-j8wtt:portname2/proxy/: bar (200; 23.076349ms) May 7 00:02:02.198: INFO: (4) /api/v1/namespaces/proxy-5309/services/http:proxy-service-j8wtt:portname1/proxy/: foo (200; 23.176226ms) May 7 00:02:02.198: INFO: (4) /api/v1/namespaces/proxy-5309/pods/http:proxy-service-j8wtt-f4srw:162/proxy/: bar (200; 23.24442ms) May 7 00:02:02.198: INFO: (4) /api/v1/namespaces/proxy-5309/services/proxy-service-j8wtt:portname1/proxy/: foo (200; 23.166744ms) May 7 00:02:02.198: INFO: (4) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw:162/proxy/: bar (200; 23.210507ms) May 7 00:02:02.198: INFO: (4) /api/v1/namespaces/proxy-5309/pods/https:proxy-service-j8wtt-f4srw:462/proxy/: tls qux (200; 23.194279ms) May 7 00:02:02.198: INFO: (4) /api/v1/namespaces/proxy-5309/services/proxy-service-j8wtt:portname2/proxy/: bar (200; 23.332651ms) May 7 00:02:02.198: INFO: (4) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw:160/proxy/: foo (200; 23.365729ms) May 7 00:02:02.198: INFO: (4) /api/v1/namespaces/proxy-5309/pods/http:proxy-service-j8wtt-f4srw:1080/proxy/: ... (200; 23.520952ms) May 7 00:02:02.206: INFO: (5) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw/proxy/: test (200; 7.077843ms) May 7 00:02:02.206: INFO: (5) /api/v1/namespaces/proxy-5309/pods/http:proxy-service-j8wtt-f4srw:1080/proxy/: ... (200; 5.853593ms) May 7 00:02:02.206: INFO: (5) /api/v1/namespaces/proxy-5309/pods/http:proxy-service-j8wtt-f4srw:162/proxy/: bar (200; 6.345843ms) May 7 00:02:02.206: INFO: (5) /api/v1/namespaces/proxy-5309/pods/https:proxy-service-j8wtt-f4srw:443/proxy/: test<... (200; 7.520494ms) May 7 00:02:02.207: INFO: (5) /api/v1/namespaces/proxy-5309/pods/http:proxy-service-j8wtt-f4srw:160/proxy/: foo (200; 7.026993ms) May 7 00:02:02.207: INFO: (5) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw:160/proxy/: foo (200; 7.175388ms) May 7 00:02:02.207: INFO: (5) /api/v1/namespaces/proxy-5309/pods/https:proxy-service-j8wtt-f4srw:460/proxy/: tls baz (200; 8.4808ms) May 7 00:02:02.207: INFO: (5) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw:162/proxy/: bar (200; 8.050096ms) May 7 00:02:02.207: INFO: (5) /api/v1/namespaces/proxy-5309/pods/https:proxy-service-j8wtt-f4srw:462/proxy/: tls qux (200; 7.954714ms) May 7 00:02:02.207: INFO: (5) /api/v1/namespaces/proxy-5309/services/http:proxy-service-j8wtt:portname1/proxy/: foo (200; 7.068848ms) May 7 00:02:02.208: INFO: (5) /api/v1/namespaces/proxy-5309/services/http:proxy-service-j8wtt:portname2/proxy/: bar (200; 7.140331ms) May 7 00:02:02.208: INFO: (5) /api/v1/namespaces/proxy-5309/services/https:proxy-service-j8wtt:tlsportname1/proxy/: tls baz (200; 8.118733ms) May 7 00:02:02.208: INFO: (5) /api/v1/namespaces/proxy-5309/services/proxy-service-j8wtt:portname2/proxy/: bar (200; 8.698604ms) May 7 00:02:02.211: INFO: (5) /api/v1/namespaces/proxy-5309/services/https:proxy-service-j8wtt:tlsportname2/proxy/: tls qux (200; 12.311228ms) May 7 00:02:02.211: INFO: (5) /api/v1/namespaces/proxy-5309/services/proxy-service-j8wtt:portname1/proxy/: foo (200; 12.498064ms) May 7 00:02:02.224: INFO: (6) /api/v1/namespaces/proxy-5309/pods/http:proxy-service-j8wtt-f4srw:1080/proxy/: ... 
(200; 12.629308ms) May 7 00:02:02.224: INFO: (6) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw:160/proxy/: foo (200; 12.654387ms) May 7 00:02:02.224: INFO: (6) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw:162/proxy/: bar (200; 12.713768ms) May 7 00:02:02.224: INFO: (6) /api/v1/namespaces/proxy-5309/pods/https:proxy-service-j8wtt-f4srw:443/proxy/: test (200; 12.901172ms) May 7 00:02:02.224: INFO: (6) /api/v1/namespaces/proxy-5309/pods/http:proxy-service-j8wtt-f4srw:162/proxy/: bar (200; 13.043994ms) May 7 00:02:02.225: INFO: (6) /api/v1/namespaces/proxy-5309/pods/https:proxy-service-j8wtt-f4srw:460/proxy/: tls baz (200; 13.52107ms) May 7 00:02:02.225: INFO: (6) /api/v1/namespaces/proxy-5309/pods/https:proxy-service-j8wtt-f4srw:462/proxy/: tls qux (200; 13.762044ms) May 7 00:02:02.225: INFO: (6) /api/v1/namespaces/proxy-5309/pods/http:proxy-service-j8wtt-f4srw:160/proxy/: foo (200; 13.919801ms) May 7 00:02:02.225: INFO: (6) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw:1080/proxy/: test<... (200; 13.957042ms) May 7 00:02:02.226: INFO: (6) /api/v1/namespaces/proxy-5309/services/proxy-service-j8wtt:portname1/proxy/: foo (200; 14.203867ms) May 7 00:02:02.226: INFO: (6) /api/v1/namespaces/proxy-5309/services/http:proxy-service-j8wtt:portname2/proxy/: bar (200; 14.318535ms) May 7 00:02:02.226: INFO: (6) /api/v1/namespaces/proxy-5309/services/https:proxy-service-j8wtt:tlsportname1/proxy/: tls baz (200; 14.555259ms) May 7 00:02:02.226: INFO: (6) /api/v1/namespaces/proxy-5309/services/https:proxy-service-j8wtt:tlsportname2/proxy/: tls qux (200; 14.807303ms) May 7 00:02:02.226: INFO: (6) /api/v1/namespaces/proxy-5309/services/proxy-service-j8wtt:portname2/proxy/: bar (200; 14.865362ms) May 7 00:02:02.226: INFO: (6) /api/v1/namespaces/proxy-5309/services/http:proxy-service-j8wtt:portname1/proxy/: foo (200; 14.759522ms) May 7 00:02:02.242: INFO: (7) /api/v1/namespaces/proxy-5309/pods/http:proxy-service-j8wtt-f4srw:162/proxy/: bar (200; 15.53878ms) May 7 00:02:02.242: INFO: (7) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw:162/proxy/: bar (200; 15.640762ms) May 7 00:02:02.242: INFO: (7) /api/v1/namespaces/proxy-5309/pods/https:proxy-service-j8wtt-f4srw:460/proxy/: tls baz (200; 16.080066ms) May 7 00:02:02.242: INFO: (7) /api/v1/namespaces/proxy-5309/pods/http:proxy-service-j8wtt-f4srw:160/proxy/: foo (200; 15.997747ms) May 7 00:02:02.244: INFO: (7) /api/v1/namespaces/proxy-5309/services/http:proxy-service-j8wtt:portname1/proxy/: foo (200; 17.797993ms) May 7 00:02:02.244: INFO: (7) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw/proxy/: test (200; 17.830427ms) May 7 00:02:02.244: INFO: (7) /api/v1/namespaces/proxy-5309/pods/https:proxy-service-j8wtt-f4srw:462/proxy/: tls qux (200; 17.833894ms) May 7 00:02:02.244: INFO: (7) /api/v1/namespaces/proxy-5309/pods/http:proxy-service-j8wtt-f4srw:1080/proxy/: ... 
(200; 17.949126ms) May 7 00:02:02.244: INFO: (7) /api/v1/namespaces/proxy-5309/services/proxy-service-j8wtt:portname2/proxy/: bar (200; 17.921926ms) May 7 00:02:02.244: INFO: (7) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw:160/proxy/: foo (200; 17.956672ms) May 7 00:02:02.244: INFO: (7) /api/v1/namespaces/proxy-5309/services/http:proxy-service-j8wtt:portname2/proxy/: bar (200; 18.067852ms) May 7 00:02:02.244: INFO: (7) /api/v1/namespaces/proxy-5309/services/https:proxy-service-j8wtt:tlsportname2/proxy/: tls qux (200; 17.977977ms) May 7 00:02:02.245: INFO: (7) /api/v1/namespaces/proxy-5309/services/https:proxy-service-j8wtt:tlsportname1/proxy/: tls baz (200; 18.170155ms) May 7 00:02:02.245: INFO: (7) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw:1080/proxy/: test<... (200; 18.272281ms) May 7 00:02:02.245: INFO: (7) /api/v1/namespaces/proxy-5309/pods/https:proxy-service-j8wtt-f4srw:443/proxy/: ... (200; 13.646637ms) May 7 00:02:02.260: INFO: (8) /api/v1/namespaces/proxy-5309/pods/http:proxy-service-j8wtt-f4srw:160/proxy/: foo (200; 14.185887ms) May 7 00:02:02.260: INFO: (8) /api/v1/namespaces/proxy-5309/pods/http:proxy-service-j8wtt-f4srw:162/proxy/: bar (200; 14.115009ms) May 7 00:02:02.261: INFO: (8) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw:160/proxy/: foo (200; 15.073327ms) May 7 00:02:02.261: INFO: (8) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw:1080/proxy/: test<... (200; 15.559707ms) May 7 00:02:02.262: INFO: (8) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw:162/proxy/: bar (200; 16.641323ms) May 7 00:02:02.262: INFO: (8) /api/v1/namespaces/proxy-5309/pods/https:proxy-service-j8wtt-f4srw:460/proxy/: tls baz (200; 17.028695ms) May 7 00:02:02.262: INFO: (8) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw/proxy/: test (200; 16.503736ms) May 7 00:02:02.262: INFO: (8) /api/v1/namespaces/proxy-5309/pods/https:proxy-service-j8wtt-f4srw:462/proxy/: tls qux (200; 16.673203ms) May 7 00:02:02.262: INFO: (8) /api/v1/namespaces/proxy-5309/pods/https:proxy-service-j8wtt-f4srw:443/proxy/: test<... (200; 43.350363ms) May 7 00:02:02.310: INFO: (9) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw/proxy/: test (200; 43.353186ms) May 7 00:02:02.311: INFO: (9) /api/v1/namespaces/proxy-5309/pods/http:proxy-service-j8wtt-f4srw:160/proxy/: foo (200; 43.509113ms) May 7 00:02:02.311: INFO: (9) /api/v1/namespaces/proxy-5309/pods/https:proxy-service-j8wtt-f4srw:460/proxy/: tls baz (200; 43.541057ms) May 7 00:02:02.311: INFO: (9) /api/v1/namespaces/proxy-5309/pods/http:proxy-service-j8wtt-f4srw:1080/proxy/: ... 
(200; 43.488051ms) May 7 00:02:02.311: INFO: (9) /api/v1/namespaces/proxy-5309/pods/http:proxy-service-j8wtt-f4srw:162/proxy/: bar (200; 43.476977ms) May 7 00:02:02.311: INFO: (9) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw:160/proxy/: foo (200; 43.57347ms) May 7 00:02:02.311: INFO: (9) /api/v1/namespaces/proxy-5309/pods/https:proxy-service-j8wtt-f4srw:462/proxy/: tls qux (200; 43.767186ms) May 7 00:02:02.311: INFO: (9) /api/v1/namespaces/proxy-5309/services/proxy-service-j8wtt:portname1/proxy/: foo (200; 43.625818ms) May 7 00:02:02.312: INFO: (9) /api/v1/namespaces/proxy-5309/services/http:proxy-service-j8wtt:portname2/proxy/: bar (200; 44.895064ms) May 7 00:02:02.312: INFO: (9) /api/v1/namespaces/proxy-5309/services/http:proxy-service-j8wtt:portname1/proxy/: foo (200; 45.224735ms) May 7 00:02:02.312: INFO: (9) /api/v1/namespaces/proxy-5309/services/proxy-service-j8wtt:portname2/proxy/: bar (200; 45.234007ms) May 7 00:02:02.312: INFO: (9) /api/v1/namespaces/proxy-5309/services/https:proxy-service-j8wtt:tlsportname2/proxy/: tls qux (200; 45.27427ms) May 7 00:02:02.312: INFO: (9) /api/v1/namespaces/proxy-5309/services/https:proxy-service-j8wtt:tlsportname1/proxy/: tls baz (200; 45.249713ms) May 7 00:02:02.320: INFO: (10) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw:160/proxy/: foo (200; 7.426439ms) May 7 00:02:02.320: INFO: (10) /api/v1/namespaces/proxy-5309/pods/http:proxy-service-j8wtt-f4srw:160/proxy/: foo (200; 7.404584ms) May 7 00:02:02.320: INFO: (10) /api/v1/namespaces/proxy-5309/pods/https:proxy-service-j8wtt-f4srw:460/proxy/: tls baz (200; 7.186235ms) May 7 00:02:02.320: INFO: (10) /api/v1/namespaces/proxy-5309/pods/https:proxy-service-j8wtt-f4srw:462/proxy/: tls qux (200; 7.960096ms) May 7 00:02:02.321: INFO: (10) /api/v1/namespaces/proxy-5309/pods/https:proxy-service-j8wtt-f4srw:443/proxy/: test<... (200; 8.806736ms) May 7 00:02:02.322: INFO: (10) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw/proxy/: test (200; 9.164803ms) May 7 00:02:02.322: INFO: (10) /api/v1/namespaces/proxy-5309/pods/http:proxy-service-j8wtt-f4srw:1080/proxy/: ... 
(200; 8.766825ms) May 7 00:02:02.322: INFO: (10) /api/v1/namespaces/proxy-5309/services/proxy-service-j8wtt:portname2/proxy/: bar (200; 8.773431ms) May 7 00:02:02.322: INFO: (10) /api/v1/namespaces/proxy-5309/pods/http:proxy-service-j8wtt-f4srw:162/proxy/: bar (200; 9.139064ms) May 7 00:02:02.322: INFO: (10) /api/v1/namespaces/proxy-5309/services/https:proxy-service-j8wtt:tlsportname2/proxy/: tls qux (200; 9.599419ms) May 7 00:02:02.322: INFO: (10) /api/v1/namespaces/proxy-5309/services/https:proxy-service-j8wtt:tlsportname1/proxy/: tls baz (200; 9.102012ms) May 7 00:02:02.322: INFO: (10) /api/v1/namespaces/proxy-5309/services/http:proxy-service-j8wtt:portname1/proxy/: foo (200; 9.07085ms) May 7 00:02:02.335: INFO: (10) /api/v1/namespaces/proxy-5309/services/proxy-service-j8wtt:portname1/proxy/: foo (200; 22.83678ms) May 7 00:02:02.336: INFO: (10) /api/v1/namespaces/proxy-5309/services/http:proxy-service-j8wtt:portname2/proxy/: bar (200; 23.095376ms) May 7 00:02:02.363: INFO: (11) /api/v1/namespaces/proxy-5309/pods/https:proxy-service-j8wtt-f4srw:462/proxy/: tls qux (200; 26.86049ms) May 7 00:02:02.363: INFO: (11) /api/v1/namespaces/proxy-5309/pods/https:proxy-service-j8wtt-f4srw:460/proxy/: tls baz (200; 26.916093ms) May 7 00:02:02.363: INFO: (11) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw/proxy/: test (200; 26.971962ms) May 7 00:02:02.363: INFO: (11) /api/v1/namespaces/proxy-5309/pods/https:proxy-service-j8wtt-f4srw:443/proxy/: ... (200; 27.305436ms) May 7 00:02:02.363: INFO: (11) /api/v1/namespaces/proxy-5309/pods/http:proxy-service-j8wtt-f4srw:162/proxy/: bar (200; 27.358537ms) May 7 00:02:02.363: INFO: (11) /api/v1/namespaces/proxy-5309/pods/http:proxy-service-j8wtt-f4srw:160/proxy/: foo (200; 27.480956ms) May 7 00:02:02.363: INFO: (11) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw:1080/proxy/: test<... (200; 27.465344ms) May 7 00:02:02.364: INFO: (11) /api/v1/namespaces/proxy-5309/services/proxy-service-j8wtt:portname1/proxy/: foo (200; 28.456466ms) May 7 00:02:02.365: INFO: (11) /api/v1/namespaces/proxy-5309/services/proxy-service-j8wtt:portname2/proxy/: bar (200; 28.892015ms) May 7 00:02:02.365: INFO: (11) /api/v1/namespaces/proxy-5309/services/https:proxy-service-j8wtt:tlsportname1/proxy/: tls baz (200; 28.951503ms) May 7 00:02:02.365: INFO: (11) /api/v1/namespaces/proxy-5309/services/http:proxy-service-j8wtt:portname2/proxy/: bar (200; 29.298934ms) May 7 00:02:02.379: INFO: (11) /api/v1/namespaces/proxy-5309/services/https:proxy-service-j8wtt:tlsportname2/proxy/: tls qux (200; 43.612497ms) May 7 00:02:02.404: INFO: (12) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw:160/proxy/: foo (200; 24.389201ms) May 7 00:02:02.404: INFO: (12) /api/v1/namespaces/proxy-5309/pods/http:proxy-service-j8wtt-f4srw:1080/proxy/: ... (200; 24.672624ms) May 7 00:02:02.404: INFO: (12) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw:162/proxy/: bar (200; 24.900263ms) May 7 00:02:02.404: INFO: (12) /api/v1/namespaces/proxy-5309/pods/http:proxy-service-j8wtt-f4srw:162/proxy/: bar (200; 24.859506ms) May 7 00:02:02.404: INFO: (12) /api/v1/namespaces/proxy-5309/pods/https:proxy-service-j8wtt-f4srw:462/proxy/: tls qux (200; 25.096544ms) May 7 00:02:02.404: INFO: (12) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw/proxy/: test (200; 25.053842ms) May 7 00:02:02.405: INFO: (12) /api/v1/namespaces/proxy-5309/pods/https:proxy-service-j8wtt-f4srw:443/proxy/: test<... 
(200; 27.561772ms) May 7 00:02:02.466: INFO: (13) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw:162/proxy/: bar (200; 58.354475ms) May 7 00:02:02.466: INFO: (13) /api/v1/namespaces/proxy-5309/pods/http:proxy-service-j8wtt-f4srw:160/proxy/: foo (200; 58.460573ms) May 7 00:02:02.466: INFO: (13) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw:160/proxy/: foo (200; 58.475614ms) May 7 00:02:02.466: INFO: (13) /api/v1/namespaces/proxy-5309/pods/https:proxy-service-j8wtt-f4srw:443/proxy/: test (200; 58.661139ms) May 7 00:02:02.466: INFO: (13) /api/v1/namespaces/proxy-5309/pods/http:proxy-service-j8wtt-f4srw:1080/proxy/: ... (200; 58.613794ms) May 7 00:02:02.466: INFO: (13) /api/v1/namespaces/proxy-5309/pods/http:proxy-service-j8wtt-f4srw:162/proxy/: bar (200; 58.778582ms) May 7 00:02:02.466: INFO: (13) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw:1080/proxy/: test<... (200; 58.692458ms) May 7 00:02:02.466: INFO: (13) /api/v1/namespaces/proxy-5309/pods/https:proxy-service-j8wtt-f4srw:460/proxy/: tls baz (200; 58.847898ms) May 7 00:02:02.468: INFO: (13) /api/v1/namespaces/proxy-5309/services/http:proxy-service-j8wtt:portname1/proxy/: foo (200; 60.924374ms) May 7 00:02:02.468: INFO: (13) /api/v1/namespaces/proxy-5309/services/http:proxy-service-j8wtt:portname2/proxy/: bar (200; 60.989271ms) May 7 00:02:02.468: INFO: (13) /api/v1/namespaces/proxy-5309/services/proxy-service-j8wtt:portname2/proxy/: bar (200; 61.409271ms) May 7 00:02:02.468: INFO: (13) /api/v1/namespaces/proxy-5309/services/https:proxy-service-j8wtt:tlsportname2/proxy/: tls qux (200; 61.425496ms) May 7 00:02:02.469: INFO: (13) /api/v1/namespaces/proxy-5309/services/proxy-service-j8wtt:portname1/proxy/: foo (200; 61.384821ms) May 7 00:02:02.469: INFO: (13) /api/v1/namespaces/proxy-5309/services/https:proxy-service-j8wtt:tlsportname1/proxy/: tls baz (200; 61.291483ms) May 7 00:02:02.481: INFO: (14) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw:162/proxy/: bar (200; 12.397871ms) May 7 00:02:02.482: INFO: (14) /api/v1/namespaces/proxy-5309/pods/https:proxy-service-j8wtt-f4srw:462/proxy/: tls qux (200; 12.669412ms) May 7 00:02:02.482: INFO: (14) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw:1080/proxy/: test<... (200; 12.603289ms) May 7 00:02:02.482: INFO: (14) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw/proxy/: test (200; 12.622156ms) May 7 00:02:02.482: INFO: (14) /api/v1/namespaces/proxy-5309/pods/http:proxy-service-j8wtt-f4srw:1080/proxy/: ... (200; 12.47743ms) May 7 00:02:02.483: INFO: (14) /api/v1/namespaces/proxy-5309/pods/https:proxy-service-j8wtt-f4srw:443/proxy/: test<... (200; 4.433537ms) May 7 00:02:02.488: INFO: (15) /api/v1/namespaces/proxy-5309/services/proxy-service-j8wtt:portname2/proxy/: bar (200; 4.518999ms) May 7 00:02:02.488: INFO: (15) /api/v1/namespaces/proxy-5309/services/proxy-service-j8wtt:portname1/proxy/: foo (200; 4.526324ms) May 7 00:02:02.489: INFO: (15) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw/proxy/: test (200; 4.606437ms) May 7 00:02:02.489: INFO: (15) /api/v1/namespaces/proxy-5309/pods/https:proxy-service-j8wtt-f4srw:460/proxy/: tls baz (200; 5.125517ms) May 7 00:02:02.489: INFO: (15) /api/v1/namespaces/proxy-5309/pods/http:proxy-service-j8wtt-f4srw:1080/proxy/: ... 
(200; 5.126927ms) May 7 00:02:02.489: INFO: (15) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw:162/proxy/: bar (200; 5.231226ms) May 7 00:02:02.489: INFO: (15) /api/v1/namespaces/proxy-5309/pods/https:proxy-service-j8wtt-f4srw:462/proxy/: tls qux (200; 5.145979ms) May 7 00:02:02.489: INFO: (15) /api/v1/namespaces/proxy-5309/pods/http:proxy-service-j8wtt-f4srw:160/proxy/: foo (200; 5.277132ms) May 7 00:02:02.489: INFO: (15) /api/v1/namespaces/proxy-5309/pods/https:proxy-service-j8wtt-f4srw:443/proxy/: test (200; 20.289311ms) May 7 00:02:02.514: INFO: (16) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw:160/proxy/: foo (200; 20.298303ms) May 7 00:02:02.514: INFO: (16) /api/v1/namespaces/proxy-5309/pods/https:proxy-service-j8wtt-f4srw:443/proxy/: test<... (200; 20.684466ms) May 7 00:02:02.516: INFO: (16) /api/v1/namespaces/proxy-5309/pods/https:proxy-service-j8wtt-f4srw:462/proxy/: tls qux (200; 22.283353ms) May 7 00:02:02.516: INFO: (16) /api/v1/namespaces/proxy-5309/pods/http:proxy-service-j8wtt-f4srw:160/proxy/: foo (200; 22.380119ms) May 7 00:02:02.516: INFO: (16) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw:162/proxy/: bar (200; 22.444438ms) May 7 00:02:02.516: INFO: (16) /api/v1/namespaces/proxy-5309/services/https:proxy-service-j8wtt:tlsportname1/proxy/: tls baz (200; 22.736767ms) May 7 00:02:02.516: INFO: (16) /api/v1/namespaces/proxy-5309/services/http:proxy-service-j8wtt:portname2/proxy/: bar (200; 22.777606ms) May 7 00:02:02.516: INFO: (16) /api/v1/namespaces/proxy-5309/services/http:proxy-service-j8wtt:portname1/proxy/: foo (200; 22.782194ms) May 7 00:02:02.517: INFO: (16) /api/v1/namespaces/proxy-5309/pods/http:proxy-service-j8wtt-f4srw:1080/proxy/: ... (200; 23.235672ms) May 7 00:02:02.517: INFO: (16) /api/v1/namespaces/proxy-5309/services/https:proxy-service-j8wtt:tlsportname2/proxy/: tls qux (200; 23.279244ms) May 7 00:02:02.517: INFO: (16) /api/v1/namespaces/proxy-5309/services/proxy-service-j8wtt:portname1/proxy/: foo (200; 23.230094ms) May 7 00:02:02.517: INFO: (16) /api/v1/namespaces/proxy-5309/services/proxy-service-j8wtt:portname2/proxy/: bar (200; 23.256149ms) May 7 00:02:02.524: INFO: (17) /api/v1/namespaces/proxy-5309/pods/http:proxy-service-j8wtt-f4srw:1080/proxy/: ... (200; 5.735722ms) May 7 00:02:02.524: INFO: (17) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw:162/proxy/: bar (200; 5.33741ms) May 7 00:02:02.524: INFO: (17) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw/proxy/: test (200; 5.124408ms) May 7 00:02:02.524: INFO: (17) /api/v1/namespaces/proxy-5309/pods/https:proxy-service-j8wtt-f4srw:462/proxy/: tls qux (200; 5.47169ms) May 7 00:02:02.524: INFO: (17) /api/v1/namespaces/proxy-5309/pods/https:proxy-service-j8wtt-f4srw:460/proxy/: tls baz (200; 6.031341ms) May 7 00:02:02.524: INFO: (17) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw:1080/proxy/: test<... (200; 7.066477ms) May 7 00:02:02.524: INFO: (17) /api/v1/namespaces/proxy-5309/pods/http:proxy-service-j8wtt-f4srw:160/proxy/: foo (200; 6.670247ms) May 7 00:02:02.524: INFO: (17) /api/v1/namespaces/proxy-5309/services/http:proxy-service-j8wtt:portname1/proxy/: foo (200; 6.438957ms) May 7 00:02:02.524: INFO: (17) /api/v1/namespaces/proxy-5309/pods/http:proxy-service-j8wtt-f4srw:162/proxy/: bar (200; 6.600388ms) May 7 00:02:02.524: INFO: (17) /api/v1/namespaces/proxy-5309/pods/https:proxy-service-j8wtt-f4srw:443/proxy/: ... 
(200; 6.649949ms) May 7 00:02:02.536: INFO: (18) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw:162/proxy/: bar (200; 6.998866ms) May 7 00:02:02.536: INFO: (18) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw:1080/proxy/: test<... (200; 7.172692ms) May 7 00:02:02.536: INFO: (18) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw:160/proxy/: foo (200; 7.309887ms) May 7 00:02:02.536: INFO: (18) /api/v1/namespaces/proxy-5309/pods/https:proxy-service-j8wtt-f4srw:443/proxy/: test (200; 8.798119ms) May 7 00:02:02.541: INFO: (18) /api/v1/namespaces/proxy-5309/services/proxy-service-j8wtt:portname2/proxy/: bar (200; 12.328249ms) May 7 00:02:02.541: INFO: (18) /api/v1/namespaces/proxy-5309/services/https:proxy-service-j8wtt:tlsportname1/proxy/: tls baz (200; 12.551299ms) May 7 00:02:02.553: INFO: (19) /api/v1/namespaces/proxy-5309/pods/http:proxy-service-j8wtt-f4srw:162/proxy/: bar (200; 11.66899ms) May 7 00:02:02.553: INFO: (19) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw/proxy/: test (200; 11.924726ms) May 7 00:02:02.554: INFO: (19) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw:160/proxy/: foo (200; 12.203825ms) May 7 00:02:02.554: INFO: (19) /api/v1/namespaces/proxy-5309/pods/http:proxy-service-j8wtt-f4srw:160/proxy/: foo (200; 12.772514ms) May 7 00:02:02.554: INFO: (19) /api/v1/namespaces/proxy-5309/pods/https:proxy-service-j8wtt-f4srw:443/proxy/: test<... (200; 12.77419ms) May 7 00:02:02.555: INFO: (19) /api/v1/namespaces/proxy-5309/pods/proxy-service-j8wtt-f4srw:162/proxy/: bar (200; 13.120036ms) May 7 00:02:02.555: INFO: (19) /api/v1/namespaces/proxy-5309/services/proxy-service-j8wtt:portname1/proxy/: foo (200; 13.218514ms) May 7 00:02:02.555: INFO: (19) /api/v1/namespaces/proxy-5309/services/proxy-service-j8wtt:portname2/proxy/: bar (200; 12.991368ms) May 7 00:02:02.555: INFO: (19) /api/v1/namespaces/proxy-5309/pods/http:proxy-service-j8wtt-f4srw:1080/proxy/: ... (200; 13.619035ms) May 7 00:02:02.555: INFO: (19) /api/v1/namespaces/proxy-5309/pods/https:proxy-service-j8wtt-f4srw:462/proxy/: tls qux (200; 13.761963ms) May 7 00:02:02.556: INFO: (19) /api/v1/namespaces/proxy-5309/pods/https:proxy-service-j8wtt-f4srw:460/proxy/: tls baz (200; 14.225452ms) May 7 00:02:02.558: INFO: (19) /api/v1/namespaces/proxy-5309/services/https:proxy-service-j8wtt:tlsportname2/proxy/: tls qux (200; 17.041795ms) May 7 00:02:02.603: INFO: (19) /api/v1/namespaces/proxy-5309/services/http:proxy-service-j8wtt:portname1/proxy/: foo (200; 61.722326ms) May 7 00:02:02.603: INFO: (19) /api/v1/namespaces/proxy-5309/services/https:proxy-service-j8wtt:tlsportname1/proxy/: tls baz (200; 61.695024ms) May 7 00:02:02.603: INFO: (19) /api/v1/namespaces/proxy-5309/services/http:proxy-service-j8wtt:portname2/proxy/: bar (200; 62.099811ms) STEP: deleting ReplicationController proxy-service-j8wtt in namespace proxy-5309, will wait for the garbage collector to delete the pods May 7 00:02:02.676: INFO: Deleting ReplicationController proxy-service-j8wtt took: 6.647602ms May 7 00:02:02.976: INFO: Terminating ReplicationController proxy-service-j8wtt pods took: 300.267229ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 7 00:02:05.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-5309" for this suite. 
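------------------------------
The sixteen case types above all go through apiserver proxy paths of the form /api/v1/namespaces/<ns>/pods/<scheme>:<pod>:<port>/proxy/<path> (plus the analogous services/... paths). Below is a minimal sketch of issuing one such request with client-go's REST client, assuming a client-go recent enough that Do() takes a context; the namespace, pod name, and port are copied from the log but stand in for any target. This is not the e2e framework's own helper.

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the same kubeconfig the suite uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Equivalent of GET /api/v1/namespaces/proxy-5309/pods/http:proxy-service-j8wtt-f4srw:162/proxy/
	// The "http:<pod>:<port>" name encodes the scheme and port to proxy to.
	body, err := cs.CoreV1().RESTClient().Get().
		Namespace("proxy-5309").
		Resource("pods").
		Name("http:proxy-service-j8wtt-f4srw:162").
		SubResource("proxy").
		Do(context.TODO()).
		Raw()
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body)) // the log above saw "bar" from this port
}
------------------------------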
• [SLOW TEST:14.697 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":278,"completed":227,"skipped":3633,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 7 00:02:05.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-projected-all-test-volume-6952a88b-c8b6-414c-bff9-e3a9a65706bf STEP: Creating secret with name secret-projected-all-test-volume-f80198aa-1c76-4d39-989c-c1986ac3a96a STEP: Creating a pod to test Check all projections for projected volume plugin May 7 00:02:05.896: INFO: Waiting up to 5m0s for pod "projected-volume-7ebb3454-7e64-4559-80b9-f899ade210f6" in namespace "projected-2837" to be "success or failure" May 7 00:02:05.908: INFO: Pod "projected-volume-7ebb3454-7e64-4559-80b9-f899ade210f6": Phase="Pending", Reason="", readiness=false. Elapsed: 12.20578ms May 7 00:02:07.912: INFO: Pod "projected-volume-7ebb3454-7e64-4559-80b9-f899ade210f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016163635s May 7 00:02:09.915: INFO: Pod "projected-volume-7ebb3454-7e64-4559-80b9-f899ade210f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019363625s STEP: Saw pod success May 7 00:02:09.915: INFO: Pod "projected-volume-7ebb3454-7e64-4559-80b9-f899ade210f6" satisfied condition "success or failure" May 7 00:02:09.918: INFO: Trying to get logs from node jerma-worker pod projected-volume-7ebb3454-7e64-4559-80b9-f899ade210f6 container projected-all-volume-test: STEP: delete the pod May 7 00:02:09.952: INFO: Waiting for pod projected-volume-7ebb3454-7e64-4559-80b9-f899ade210f6 to disappear May 7 00:02:10.141: INFO: Pod projected-volume-7ebb3454-7e64-4559-80b9-f899ade210f6 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 7 00:02:10.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2837" for this suite. 
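------------------------------
The "all components" projection exercised above mounts a configMap, a secret, and downwardAPI fields through one volume. As a rough sketch of how such a volume is declared with the corev1 types, under the assumption that the object names below are illustrative stand-ins rather than the framework's generated ones:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// One projected volume layering three sources, as in the test above.
	vol := corev1.Volume{
		Name: "all-in-one",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-projected-all-test-volume"},
					}},
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "secret-projected-all-test-volume"},
					}},
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out)) // prints the volume spec the pod would mount
}
------------------------------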
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":228,"skipped":3639,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 7 00:02:10.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-df068b72-72be-4c57-96df-515e265fb986 STEP: Creating a pod to test consume secrets May 7 00:02:10.665: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-53922cac-5f7f-4299-a6bd-19f74e13ff38" in namespace "projected-7431" to be "success or failure" May 7 00:02:10.831: INFO: Pod "pod-projected-secrets-53922cac-5f7f-4299-a6bd-19f74e13ff38": Phase="Pending", Reason="", readiness=false. Elapsed: 165.723298ms May 7 00:02:12.836: INFO: Pod "pod-projected-secrets-53922cac-5f7f-4299-a6bd-19f74e13ff38": Phase="Pending", Reason="", readiness=false. Elapsed: 2.17018829s May 7 00:02:14.839: INFO: Pod "pod-projected-secrets-53922cac-5f7f-4299-a6bd-19f74e13ff38": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.173557474s STEP: Saw pod success May 7 00:02:14.839: INFO: Pod "pod-projected-secrets-53922cac-5f7f-4299-a6bd-19f74e13ff38" satisfied condition "success or failure" May 7 00:02:14.841: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-53922cac-5f7f-4299-a6bd-19f74e13ff38 container projected-secret-volume-test: STEP: delete the pod May 7 00:02:14.880: INFO: Waiting for pod pod-projected-secrets-53922cac-5f7f-4299-a6bd-19f74e13ff38 to disappear May 7 00:02:14.896: INFO: Pod pod-projected-secrets-53922cac-5f7f-4299-a6bd-19f74e13ff38 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 7 00:02:14.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7431" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":229,"skipped":3652,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 7 00:02:14.902: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-840 STEP: creating a selector STEP: Creating the service pods in kubernetes May 7 00:02:14.968: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 7 00:02:45.524: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.198:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-840 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 00:02:45.524: INFO: >>> kubeConfig: /root/.kube/config I0507 00:02:45.559805 6 log.go:172] (0xc0027c9b80) (0xc000ca90e0) Create stream I0507 00:02:45.559828 6 log.go:172] (0xc0027c9b80) (0xc000ca90e0) Stream added, broadcasting: 1 I0507 00:02:45.561814 6 log.go:172] (0xc0027c9b80) Reply frame received for 1 I0507 00:02:45.561864 6 log.go:172] (0xc0027c9b80) (0xc000cea500) Create stream I0507 00:02:45.561884 6 log.go:172] (0xc0027c9b80) (0xc000cea500) Stream added, broadcasting: 3 I0507 00:02:45.562931 6 log.go:172] (0xc0027c9b80) Reply frame received for 3 I0507 00:02:45.562952 6 log.go:172] (0xc0027c9b80) (0xc00122c000) Create stream I0507 00:02:45.562958 6 log.go:172] (0xc0027c9b80) (0xc00122c000) Stream added, broadcasting: 5 I0507 00:02:45.564216 6 log.go:172] (0xc0027c9b80) Reply frame received for 5 I0507 00:02:45.639937 6 log.go:172] (0xc0027c9b80) Data frame received for 5 I0507 00:02:45.639990 6 log.go:172] (0xc00122c000) (5) Data frame handling I0507 00:02:45.640017 6 log.go:172] (0xc0027c9b80) Data frame received for 3 I0507 00:02:45.640034 6 log.go:172] (0xc000cea500) (3) Data frame handling I0507 00:02:45.640056 6 log.go:172] (0xc000cea500) (3) Data frame sent I0507 00:02:45.640067 6 log.go:172] (0xc0027c9b80) Data frame received for 3 I0507 00:02:45.640097 6 log.go:172] (0xc000cea500) (3) Data frame handling I0507 00:02:45.641767 6 log.go:172] (0xc0027c9b80) Data frame received for 1 I0507 00:02:45.641786 6 log.go:172] (0xc000ca90e0) (1) Data frame handling I0507 00:02:45.641795 6 log.go:172] (0xc000ca90e0) (1) Data frame sent I0507 00:02:45.641806 6 log.go:172] (0xc0027c9b80) (0xc000ca90e0) Stream removed, broadcasting: 1 I0507 00:02:45.641843 6 log.go:172] (0xc0027c9b80) Go away received I0507 00:02:45.641889 6 log.go:172] (0xc0027c9b80) (0xc000ca90e0) Stream removed, broadcasting: 1 I0507 00:02:45.641911 6 log.go:172] (0xc0027c9b80) (0xc000cea500) Stream 
removed, broadcasting: 3 I0507 00:02:45.641924 6 log.go:172] (0xc0027c9b80) (0xc00122c000) Stream removed, broadcasting: 5 May 7 00:02:45.641: INFO: Found all expected endpoints: [netserver-0] May 7 00:02:45.644: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.101:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-840 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 00:02:45.644: INFO: >>> kubeConfig: /root/.kube/config I0507 00:02:45.676040 6 log.go:172] (0xc001d12420) (0xc000ceac80) Create stream I0507 00:02:45.676072 6 log.go:172] (0xc001d12420) (0xc000ceac80) Stream added, broadcasting: 1 I0507 00:02:45.678606 6 log.go:172] (0xc001d12420) Reply frame received for 1 I0507 00:02:45.678652 6 log.go:172] (0xc001d12420) (0xc000cead20) Create stream I0507 00:02:45.678665 6 log.go:172] (0xc001d12420) (0xc000cead20) Stream added, broadcasting: 3 I0507 00:02:45.679636 6 log.go:172] (0xc001d12420) Reply frame received for 3 I0507 00:02:45.679671 6 log.go:172] (0xc001d12420) (0xc001eae960) Create stream I0507 00:02:45.679680 6 log.go:172] (0xc001d12420) (0xc001eae960) Stream added, broadcasting: 5 I0507 00:02:45.680673 6 log.go:172] (0xc001d12420) Reply frame received for 5 I0507 00:02:45.769104 6 log.go:172] (0xc001d12420) Data frame received for 3 I0507 00:02:45.769364 6 log.go:172] (0xc000cead20) (3) Data frame handling I0507 00:02:45.769395 6 log.go:172] (0xc000cead20) (3) Data frame sent I0507 00:02:45.769472 6 log.go:172] (0xc001d12420) Data frame received for 3 I0507 00:02:45.769518 6 log.go:172] (0xc000cead20) (3) Data frame handling I0507 00:02:45.769545 6 log.go:172] (0xc001d12420) Data frame received for 5 I0507 00:02:45.769559 6 log.go:172] (0xc001eae960) (5) Data frame handling I0507 00:02:45.771863 6 log.go:172] (0xc001d12420) Data frame received for 1 I0507 00:02:45.771891 6 log.go:172] (0xc000ceac80) (1) Data frame handling I0507 00:02:45.771923 6 log.go:172] (0xc000ceac80) (1) Data frame sent I0507 00:02:45.771951 6 log.go:172] (0xc001d12420) (0xc000ceac80) Stream removed, broadcasting: 1 I0507 00:02:45.772076 6 log.go:172] (0xc001d12420) (0xc000ceac80) Stream removed, broadcasting: 1 I0507 00:02:45.772097 6 log.go:172] (0xc001d12420) (0xc000cead20) Stream removed, broadcasting: 3 I0507 00:02:45.772128 6 log.go:172] (0xc001d12420) (0xc001eae960) Stream removed, broadcasting: 5 May 7 00:02:45.772: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 I0507 00:02:45.772211 6 log.go:172] (0xc001d12420) Go away received May 7 00:02:45.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-840" for this suite. 
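The connectivity check above is driven by curl from a host-network helper pod against each netserver's /hostName endpoint. The same probe can be sketched with nothing but the Go standard library; the pod IP below is a placeholder that only resolves inside that cluster's pod network.

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// Fetch /hostName from a netserver pod and print which backend answered,
// the same assertion the test makes for netserver-0 and netserver-1.
func main() {
	client := &http.Client{Timeout: 15 * time.Second}
	resp, err := client.Get("http://10.244.1.198:8080/hostName")
	if err != nil {
		fmt.Println("endpoint not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Println("answered by:", string(body))
}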
• [SLOW TEST:30.879 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":230,"skipped":3695,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 7 00:02:45.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod test-webserver-58c6d72c-2595-49ed-bf8d-0a831b518ebc in namespace container-probe-7845 May 7 00:02:49.871: INFO: Started pod test-webserver-58c6d72c-2595-49ed-bf8d-0a831b518ebc in namespace container-probe-7845 STEP: checking the pod's current state and verifying that restartCount is present May 7 00:02:49.874: INFO: Initial restart count of pod test-webserver-58c6d72c-2595-49ed-bf8d-0a831b518ebc is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 7 00:06:50.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7845" for this suite. 
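The probe test above runs a webserver with an HTTP liveness probe and asserts that restartCount stays at 0 for roughly four minutes. A sketch of such a pod, assuming a 1.17-era k8s.io/api (image, path, and timings are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-webserver"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "test-webserver",
				Image: "nginx", // illustrative; the e2e suite uses its own webserver image
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{ // renamed to ProbeHandler in newer k8s.io/api releases
						HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(80)},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    1,
				},
			}},
		},
	}
	// The kubelet restarts the container only if this endpoint starts
	// failing; while it stays healthy, restartCount remains 0.
	fmt.Println(pod.Name)
}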
• [SLOW TEST:245.012 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":231,"skipped":3750,"failed":0} SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 7 00:06:50.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting the proxy server May 7 00:06:50.839: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 7 00:06:50.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-309" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":232,"skipped":3761,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 7 00:06:51.224: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 7 00:06:56.623: INFO: Successfully updated pod "annotationupdate035be239-169b-4369-9894-952c7995a8b9" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 7 00:07:00.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2252" for this suite. 
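kubectl proxy -p 0 asks the OS for an ephemeral port and reports the bound address on its first output line, which is what the test then curls at /api/. A small sketch of driving that from Go with os/exec:

package main

import (
	"bufio"
	"fmt"
	"os/exec"
)

// Launch `kubectl proxy` on an ephemeral port (-p 0 lets the OS pick)
// and scan its first output line, e.g. "Starting to serve on 127.0.0.1:37041".
func main() {
	cmd := exec.Command("kubectl", "proxy", "-p", "0", "--disable-filter")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	scanner := bufio.NewScanner(stdout)
	if scanner.Scan() {
		fmt.Println(scanner.Text()) // parse the port out of this line before curling /api/
	}
	_ = cmd.Process.Kill()
}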
• [SLOW TEST:9.434 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":233,"skipped":3806,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 7 00:07:00.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 7 00:07:01.493: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 7 00:07:03.503: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406821, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406821, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406821, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406821, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 7 00:07:05.507: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406821, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406821, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406821, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724406821, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 7 00:07:08.550: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 7 00:07:08.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-424-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 7 00:07:09.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6649" for this suite. STEP: Destroying namespace "webhook-6649-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.136 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":234,"skipped":3818,"failed":0} SSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 7 00:07:09.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 7 00:07:13.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6978" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":235,"skipped":3825,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 7 00:07:13.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 7 00:07:18.243: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 7 00:07:18.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6552" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":236,"skipped":3868,"failed":0} S ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 7 00:07:18.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating pod May 7 00:07:22.535: INFO: Pod pod-hostip-70047e64-b141-499f-8f36-4a2c346c54b3 has hostIP: 172.17.0.8 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 7 00:07:22.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-884" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":237,"skipped":3869,"failed":0} SSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 7 00:07:22.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-7914b009-f3f4-45ab-b18b-dd10d6b7bcbb STEP: Creating a pod to test consume secrets May 7 00:07:23.084: INFO: Waiting up to 5m0s for pod "pod-secrets-d08f9c14-c822-4e19-bdfe-3f89288e1fc8" in namespace "secrets-7123" to be "success or failure" May 7 00:07:23.231: INFO: Pod "pod-secrets-d08f9c14-c822-4e19-bdfe-3f89288e1fc8": Phase="Pending", Reason="", readiness=false. Elapsed: 146.540322ms May 7 00:07:25.463: INFO: Pod "pod-secrets-d08f9c14-c822-4e19-bdfe-3f89288e1fc8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.378973296s May 7 00:07:27.467: INFO: Pod "pod-secrets-d08f9c14-c822-4e19-bdfe-3f89288e1fc8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.382642233s May 7 00:07:29.847: INFO: Pod "pod-secrets-d08f9c14-c822-4e19-bdfe-3f89288e1fc8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.763033847s May 7 00:07:32.040: INFO: Pod "pod-secrets-d08f9c14-c822-4e19-bdfe-3f89288e1fc8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.95585597s STEP: Saw pod success May 7 00:07:32.040: INFO: Pod "pod-secrets-d08f9c14-c822-4e19-bdfe-3f89288e1fc8" satisfied condition "success or failure" May 7 00:07:32.043: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-d08f9c14-c822-4e19-bdfe-3f89288e1fc8 container secret-volume-test: STEP: delete the pod May 7 00:07:32.735: INFO: Waiting for pod pod-secrets-d08f9c14-c822-4e19-bdfe-3f89288e1fc8 to disappear May 7 00:07:33.176: INFO: Pod pod-secrets-d08f9c14-c822-4e19-bdfe-3f89288e1fc8 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 7 00:07:33.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7123" for this suite. STEP: Destroying namespace "secret-namespace-8973" for this suite. 
• [SLOW TEST:11.000 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":238,"skipped":3873,"failed":0} SSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 7 00:07:33.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 7 00:07:48.839: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2008 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 00:07:48.839: INFO: >>> kubeConfig: /root/.kube/config I0507 00:07:48.861770 6 log.go:172] (0xc001d12dc0) (0xc00134f360) Create stream I0507 00:07:48.861801 6 log.go:172] (0xc001d12dc0) (0xc00134f360) Stream added, broadcasting: 1 I0507 00:07:48.863407 6 log.go:172] (0xc001d12dc0) Reply frame received for 1 I0507 00:07:48.863435 6 log.go:172] (0xc001d12dc0) (0xc001eafa40) Create stream I0507 00:07:48.863446 6 log.go:172] (0xc001d12dc0) (0xc001eafa40) Stream added, broadcasting: 3 I0507 00:07:48.864356 6 log.go:172] (0xc001d12dc0) Reply frame received for 3 I0507 00:07:48.864409 6 log.go:172] (0xc001d12dc0) (0xc001eafc20) Create stream I0507 00:07:48.864432 6 log.go:172] (0xc001d12dc0) (0xc001eafc20) Stream added, broadcasting: 5 I0507 00:07:48.865458 6 log.go:172] (0xc001d12dc0) Reply frame received for 5 I0507 00:07:48.946092 6 log.go:172] (0xc001d12dc0) Data frame received for 5 I0507 00:07:48.946125 6 log.go:172] (0xc001eafc20) (5) Data frame handling I0507 00:07:48.946148 6 log.go:172] (0xc001d12dc0) Data frame received for 3 I0507 00:07:48.946163 6 log.go:172] (0xc001eafa40) (3) Data frame handling I0507 00:07:48.946179 6 log.go:172] (0xc001eafa40) (3) Data frame sent I0507 00:07:48.946192 6 log.go:172] (0xc001d12dc0) Data frame received for 3 I0507 00:07:48.946202 6 log.go:172] (0xc001eafa40) (3) Data frame handling I0507 00:07:48.949105 6 log.go:172] (0xc001d12dc0) Data frame received for 1 I0507 00:07:48.949322 6 log.go:172] (0xc00134f360) (1) Data frame handling I0507 00:07:48.949373 6 log.go:172] (0xc00134f360) (1) Data frame sent I0507 00:07:48.949422 6 log.go:172] (0xc001d12dc0) 
(0xc00134f360) Stream removed, broadcasting: 1 I0507 00:07:48.949506 6 log.go:172] (0xc001d12dc0) Go away received I0507 00:07:48.949551 6 log.go:172] (0xc001d12dc0) (0xc00134f360) Stream removed, broadcasting: 1 I0507 00:07:48.949575 6 log.go:172] (0xc001d12dc0) (0xc001eafa40) Stream removed, broadcasting: 3 I0507 00:07:48.949590 6 log.go:172] (0xc001d12dc0) (0xc001eafc20) Stream removed, broadcasting: 5 May 7 00:07:48.949: INFO: Exec stderr: "" May 7 00:07:48.949: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2008 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 00:07:48.949: INFO: >>> kubeConfig: /root/.kube/config I0507 00:07:48.979550 6 log.go:172] (0xc000ea44d0) (0xc001eaff40) Create stream I0507 00:07:48.979586 6 log.go:172] (0xc000ea44d0) (0xc001eaff40) Stream added, broadcasting: 1 I0507 00:07:48.981594 6 log.go:172] (0xc000ea44d0) Reply frame received for 1 I0507 00:07:48.981623 6 log.go:172] (0xc000ea44d0) (0xc0012601e0) Create stream I0507 00:07:48.981637 6 log.go:172] (0xc000ea44d0) (0xc0012601e0) Stream added, broadcasting: 3 I0507 00:07:48.982288 6 log.go:172] (0xc000ea44d0) Reply frame received for 3 I0507 00:07:48.982319 6 log.go:172] (0xc000ea44d0) (0xc001da2320) Create stream I0507 00:07:48.982334 6 log.go:172] (0xc000ea44d0) (0xc001da2320) Stream added, broadcasting: 5 I0507 00:07:48.983134 6 log.go:172] (0xc000ea44d0) Reply frame received for 5 I0507 00:07:49.040299 6 log.go:172] (0xc000ea44d0) Data frame received for 5 I0507 00:07:49.040372 6 log.go:172] (0xc001da2320) (5) Data frame handling I0507 00:07:49.040415 6 log.go:172] (0xc000ea44d0) Data frame received for 3 I0507 00:07:49.040440 6 log.go:172] (0xc0012601e0) (3) Data frame handling I0507 00:07:49.040466 6 log.go:172] (0xc0012601e0) (3) Data frame sent I0507 00:07:49.040480 6 log.go:172] (0xc000ea44d0) Data frame received for 3 I0507 00:07:49.040493 6 log.go:172] (0xc0012601e0) (3) Data frame handling I0507 00:07:49.042081 6 log.go:172] (0xc000ea44d0) Data frame received for 1 I0507 00:07:49.042111 6 log.go:172] (0xc001eaff40) (1) Data frame handling I0507 00:07:49.042165 6 log.go:172] (0xc001eaff40) (1) Data frame sent I0507 00:07:49.042184 6 log.go:172] (0xc000ea44d0) (0xc001eaff40) Stream removed, broadcasting: 1 I0507 00:07:49.042203 6 log.go:172] (0xc000ea44d0) Go away received I0507 00:07:49.042344 6 log.go:172] (0xc000ea44d0) (0xc001eaff40) Stream removed, broadcasting: 1 I0507 00:07:49.042375 6 log.go:172] (0xc000ea44d0) (0xc0012601e0) Stream removed, broadcasting: 3 I0507 00:07:49.042398 6 log.go:172] (0xc000ea44d0) (0xc001da2320) Stream removed, broadcasting: 5 May 7 00:07:49.042: INFO: Exec stderr: "" May 7 00:07:49.042: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2008 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 00:07:49.042: INFO: >>> kubeConfig: /root/.kube/config I0507 00:07:49.068715 6 log.go:172] (0xc0016a09a0) (0xc002327b80) Create stream I0507 00:07:49.068747 6 log.go:172] (0xc0016a09a0) (0xc002327b80) Stream added, broadcasting: 1 I0507 00:07:49.070810 6 log.go:172] (0xc0016a09a0) Reply frame received for 1 I0507 00:07:49.070852 6 log.go:172] (0xc0016a09a0) (0xc001260280) Create stream I0507 00:07:49.070999 6 log.go:172] (0xc0016a09a0) (0xc001260280) Stream added, broadcasting: 3 I0507 00:07:49.071733 6 log.go:172] (0xc0016a09a0) Reply frame received for 3 I0507 
00:07:49.071860 6 log.go:172] (0xc0016a09a0) (0xc001da2640) Create stream I0507 00:07:49.071887 6 log.go:172] (0xc0016a09a0) (0xc001da2640) Stream added, broadcasting: 5 I0507 00:07:49.072601 6 log.go:172] (0xc0016a09a0) Reply frame received for 5 I0507 00:07:49.130008 6 log.go:172] (0xc0016a09a0) Data frame received for 3 I0507 00:07:49.130046 6 log.go:172] (0xc001260280) (3) Data frame handling I0507 00:07:49.130061 6 log.go:172] (0xc001260280) (3) Data frame sent I0507 00:07:49.130074 6 log.go:172] (0xc0016a09a0) Data frame received for 3 I0507 00:07:49.130122 6 log.go:172] (0xc001260280) (3) Data frame handling I0507 00:07:49.130139 6 log.go:172] (0xc0016a09a0) Data frame received for 5 I0507 00:07:49.130150 6 log.go:172] (0xc001da2640) (5) Data frame handling I0507 00:07:49.131541 6 log.go:172] (0xc0016a09a0) Data frame received for 1 I0507 00:07:49.131579 6 log.go:172] (0xc002327b80) (1) Data frame handling I0507 00:07:49.131593 6 log.go:172] (0xc002327b80) (1) Data frame sent I0507 00:07:49.131606 6 log.go:172] (0xc0016a09a0) (0xc002327b80) Stream removed, broadcasting: 1 I0507 00:07:49.131693 6 log.go:172] (0xc0016a09a0) (0xc002327b80) Stream removed, broadcasting: 1 I0507 00:07:49.131716 6 log.go:172] (0xc0016a09a0) (0xc001260280) Stream removed, broadcasting: 3 I0507 00:07:49.131726 6 log.go:172] (0xc0016a09a0) (0xc001da2640) Stream removed, broadcasting: 5 May 7 00:07:49.131: INFO: Exec stderr: "" May 7 00:07:49.131: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2008 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 00:07:49.131: INFO: >>> kubeConfig: /root/.kube/config I0507 00:07:49.131835 6 log.go:172] (0xc0016a09a0) Go away received I0507 00:07:49.164315 6 log.go:172] (0xc001d133f0) (0xc00134fea0) Create stream I0507 00:07:49.164351 6 log.go:172] (0xc001d133f0) (0xc00134fea0) Stream added, broadcasting: 1 I0507 00:07:49.167014 6 log.go:172] (0xc001d133f0) Reply frame received for 1 I0507 00:07:49.167040 6 log.go:172] (0xc001d133f0) (0xc00134ff40) Create stream I0507 00:07:49.167047 6 log.go:172] (0xc001d133f0) (0xc00134ff40) Stream added, broadcasting: 3 I0507 00:07:49.167824 6 log.go:172] (0xc001d133f0) Reply frame received for 3 I0507 00:07:49.167863 6 log.go:172] (0xc001d133f0) (0xc001260320) Create stream I0507 00:07:49.167879 6 log.go:172] (0xc001d133f0) (0xc001260320) Stream added, broadcasting: 5 I0507 00:07:49.168645 6 log.go:172] (0xc001d133f0) Reply frame received for 5 I0507 00:07:49.252683 6 log.go:172] (0xc001d133f0) Data frame received for 5 I0507 00:07:49.252721 6 log.go:172] (0xc001260320) (5) Data frame handling I0507 00:07:49.252749 6 log.go:172] (0xc001d133f0) Data frame received for 3 I0507 00:07:49.252765 6 log.go:172] (0xc00134ff40) (3) Data frame handling I0507 00:07:49.252778 6 log.go:172] (0xc00134ff40) (3) Data frame sent I0507 00:07:49.252792 6 log.go:172] (0xc001d133f0) Data frame received for 3 I0507 00:07:49.252801 6 log.go:172] (0xc00134ff40) (3) Data frame handling I0507 00:07:49.254596 6 log.go:172] (0xc001d133f0) Data frame received for 1 I0507 00:07:49.254621 6 log.go:172] (0xc00134fea0) (1) Data frame handling I0507 00:07:49.254647 6 log.go:172] (0xc00134fea0) (1) Data frame sent I0507 00:07:49.254663 6 log.go:172] (0xc001d133f0) (0xc00134fea0) Stream removed, broadcasting: 1 I0507 00:07:49.254680 6 log.go:172] (0xc001d133f0) Go away received I0507 00:07:49.254823 6 log.go:172] (0xc001d133f0) (0xc00134fea0) Stream removed, 
broadcasting: 1 I0507 00:07:49.254849 6 log.go:172] (0xc001d133f0) (0xc00134ff40) Stream removed, broadcasting: 3 I0507 00:07:49.254856 6 log.go:172] (0xc001d133f0) (0xc001260320) Stream removed, broadcasting: 5 May 7 00:07:49.254: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 7 00:07:49.254: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2008 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 00:07:49.254: INFO: >>> kubeConfig: /root/.kube/config I0507 00:07:49.279810 6 log.go:172] (0xc001b6ec60) (0xc001da2e60) Create stream I0507 00:07:49.279833 6 log.go:172] (0xc001b6ec60) (0xc001da2e60) Stream added, broadcasting: 1 I0507 00:07:49.282177 6 log.go:172] (0xc001b6ec60) Reply frame received for 1 I0507 00:07:49.282207 6 log.go:172] (0xc001b6ec60) (0xc001da3220) Create stream I0507 00:07:49.282213 6 log.go:172] (0xc001b6ec60) (0xc001da3220) Stream added, broadcasting: 3 I0507 00:07:49.283009 6 log.go:172] (0xc001b6ec60) Reply frame received for 3 I0507 00:07:49.283031 6 log.go:172] (0xc001b6ec60) (0xc001488140) Create stream I0507 00:07:49.283039 6 log.go:172] (0xc001b6ec60) (0xc001488140) Stream added, broadcasting: 5 I0507 00:07:49.283812 6 log.go:172] (0xc001b6ec60) Reply frame received for 5 I0507 00:07:49.330592 6 log.go:172] (0xc001b6ec60) Data frame received for 3 I0507 00:07:49.330629 6 log.go:172] (0xc001da3220) (3) Data frame handling I0507 00:07:49.330654 6 log.go:172] (0xc001da3220) (3) Data frame sent I0507 00:07:49.330667 6 log.go:172] (0xc001b6ec60) Data frame received for 3 I0507 00:07:49.330675 6 log.go:172] (0xc001da3220) (3) Data frame handling I0507 00:07:49.330701 6 log.go:172] (0xc001b6ec60) Data frame received for 5 I0507 00:07:49.330726 6 log.go:172] (0xc001488140) (5) Data frame handling I0507 00:07:49.331921 6 log.go:172] (0xc001b6ec60) Data frame received for 1 I0507 00:07:49.331933 6 log.go:172] (0xc001da2e60) (1) Data frame handling I0507 00:07:49.331947 6 log.go:172] (0xc001da2e60) (1) Data frame sent I0507 00:07:49.331963 6 log.go:172] (0xc001b6ec60) (0xc001da2e60) Stream removed, broadcasting: 1 I0507 00:07:49.331974 6 log.go:172] (0xc001b6ec60) Go away received I0507 00:07:49.332064 6 log.go:172] (0xc001b6ec60) (0xc001da2e60) Stream removed, broadcasting: 1 I0507 00:07:49.332082 6 log.go:172] (0xc001b6ec60) (0xc001da3220) Stream removed, broadcasting: 3 I0507 00:07:49.332088 6 log.go:172] (0xc001b6ec60) (0xc001488140) Stream removed, broadcasting: 5 May 7 00:07:49.332: INFO: Exec stderr: "" May 7 00:07:49.332: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2008 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 00:07:49.332: INFO: >>> kubeConfig: /root/.kube/config I0507 00:07:49.354443 6 log.go:172] (0xc000ea4b00) (0xc001261400) Create stream I0507 00:07:49.354469 6 log.go:172] (0xc000ea4b00) (0xc001261400) Stream added, broadcasting: 1 I0507 00:07:49.356803 6 log.go:172] (0xc000ea4b00) Reply frame received for 1 I0507 00:07:49.356846 6 log.go:172] (0xc000ea4b00) (0xc0012615e0) Create stream I0507 00:07:49.356863 6 log.go:172] (0xc000ea4b00) (0xc0012615e0) Stream added, broadcasting: 3 I0507 00:07:49.357826 6 log.go:172] (0xc000ea4b00) Reply frame received for 3 I0507 00:07:49.357871 6 log.go:172] (0xc000ea4b00) (0xc0014881e0) Create stream I0507 00:07:49.357885 6 
log.go:172] (0xc000ea4b00) (0xc0014881e0) Stream added, broadcasting: 5 I0507 00:07:49.358647 6 log.go:172] (0xc000ea4b00) Reply frame received for 5 I0507 00:07:49.419953 6 log.go:172] (0xc000ea4b00) Data frame received for 5 I0507 00:07:49.419985 6 log.go:172] (0xc0014881e0) (5) Data frame handling I0507 00:07:49.420004 6 log.go:172] (0xc000ea4b00) Data frame received for 3 I0507 00:07:49.420012 6 log.go:172] (0xc0012615e0) (3) Data frame handling I0507 00:07:49.420022 6 log.go:172] (0xc0012615e0) (3) Data frame sent I0507 00:07:49.420034 6 log.go:172] (0xc000ea4b00) Data frame received for 3 I0507 00:07:49.420043 6 log.go:172] (0xc0012615e0) (3) Data frame handling I0507 00:07:49.421059 6 log.go:172] (0xc000ea4b00) Data frame received for 1 I0507 00:07:49.421087 6 log.go:172] (0xc001261400) (1) Data frame handling I0507 00:07:49.421107 6 log.go:172] (0xc001261400) (1) Data frame sent I0507 00:07:49.421326 6 log.go:172] (0xc000ea4b00) (0xc001261400) Stream removed, broadcasting: 1 I0507 00:07:49.421354 6 log.go:172] (0xc000ea4b00) Go away received I0507 00:07:49.421443 6 log.go:172] (0xc000ea4b00) (0xc001261400) Stream removed, broadcasting: 1 I0507 00:07:49.421458 6 log.go:172] (0xc000ea4b00) (0xc0012615e0) Stream removed, broadcasting: 3 I0507 00:07:49.421471 6 log.go:172] (0xc000ea4b00) (0xc0014881e0) Stream removed, broadcasting: 5 May 7 00:07:49.421: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 7 00:07:49.421: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2008 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 00:07:49.421: INFO: >>> kubeConfig: /root/.kube/config I0507 00:07:49.451216 6 log.go:172] (0xc0016a0fd0) (0xc0011643c0) Create stream I0507 00:07:49.451250 6 log.go:172] (0xc0016a0fd0) (0xc0011643c0) Stream added, broadcasting: 1 I0507 00:07:49.453446 6 log.go:172] (0xc0016a0fd0) Reply frame received for 1 I0507 00:07:49.453492 6 log.go:172] (0xc0016a0fd0) (0xc0023d0000) Create stream I0507 00:07:49.453508 6 log.go:172] (0xc0016a0fd0) (0xc0023d0000) Stream added, broadcasting: 3 I0507 00:07:49.454763 6 log.go:172] (0xc0016a0fd0) Reply frame received for 3 I0507 00:07:49.454817 6 log.go:172] (0xc0016a0fd0) (0xc0023d00a0) Create stream I0507 00:07:49.454845 6 log.go:172] (0xc0016a0fd0) (0xc0023d00a0) Stream added, broadcasting: 5 I0507 00:07:49.455936 6 log.go:172] (0xc0016a0fd0) Reply frame received for 5 I0507 00:07:49.522280 6 log.go:172] (0xc0016a0fd0) Data frame received for 5 I0507 00:07:49.522307 6 log.go:172] (0xc0023d00a0) (5) Data frame handling I0507 00:07:49.522325 6 log.go:172] (0xc0016a0fd0) Data frame received for 3 I0507 00:07:49.522334 6 log.go:172] (0xc0023d0000) (3) Data frame handling I0507 00:07:49.522342 6 log.go:172] (0xc0023d0000) (3) Data frame sent I0507 00:07:49.522350 6 log.go:172] (0xc0016a0fd0) Data frame received for 3 I0507 00:07:49.522356 6 log.go:172] (0xc0023d0000) (3) Data frame handling I0507 00:07:49.523870 6 log.go:172] (0xc0016a0fd0) Data frame received for 1 I0507 00:07:49.523892 6 log.go:172] (0xc0011643c0) (1) Data frame handling I0507 00:07:49.523903 6 log.go:172] (0xc0011643c0) (1) Data frame sent I0507 00:07:49.523915 6 log.go:172] (0xc0016a0fd0) (0xc0011643c0) Stream removed, broadcasting: 1 I0507 00:07:49.523978 6 log.go:172] (0xc0016a0fd0) (0xc0011643c0) Stream removed, broadcasting: 1 I0507 00:07:49.523993 6 log.go:172] (0xc0016a0fd0) 
(0xc0023d0000) Stream removed, broadcasting: 3 I0507 00:07:49.524006 6 log.go:172] (0xc0016a0fd0) (0xc0023d00a0) Stream removed, broadcasting: 5 May 7 00:07:49.524: INFO: Exec stderr: "" May 7 00:07:49.524: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2008 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 00:07:49.524: INFO: >>> kubeConfig: /root/.kube/config I0507 00:07:49.524132 6 log.go:172] (0xc0016a0fd0) Go away received I0507 00:07:49.545274 6 log.go:172] (0xc000ea5130) (0xc001261d60) Create stream I0507 00:07:49.545298 6 log.go:172] (0xc000ea5130) (0xc001261d60) Stream added, broadcasting: 1 I0507 00:07:49.546755 6 log.go:172] (0xc000ea5130) Reply frame received for 1 I0507 00:07:49.546784 6 log.go:172] (0xc000ea5130) (0xc001da34a0) Create stream I0507 00:07:49.546794 6 log.go:172] (0xc000ea5130) (0xc001da34a0) Stream added, broadcasting: 3 I0507 00:07:49.547531 6 log.go:172] (0xc000ea5130) Reply frame received for 3 I0507 00:07:49.547573 6 log.go:172] (0xc000ea5130) (0xc0011645a0) Create stream I0507 00:07:49.547591 6 log.go:172] (0xc000ea5130) (0xc0011645a0) Stream added, broadcasting: 5 I0507 00:07:49.548314 6 log.go:172] (0xc000ea5130) Reply frame received for 5 I0507 00:07:49.600179 6 log.go:172] (0xc000ea5130) Data frame received for 3 I0507 00:07:49.600210 6 log.go:172] (0xc001da34a0) (3) Data frame handling I0507 00:07:49.600241 6 log.go:172] (0xc001da34a0) (3) Data frame sent I0507 00:07:49.600339 6 log.go:172] (0xc000ea5130) Data frame received for 3 I0507 00:07:49.600385 6 log.go:172] (0xc001da34a0) (3) Data frame handling I0507 00:07:49.600413 6 log.go:172] (0xc000ea5130) Data frame received for 5 I0507 00:07:49.600427 6 log.go:172] (0xc0011645a0) (5) Data frame handling I0507 00:07:49.601765 6 log.go:172] (0xc000ea5130) Data frame received for 1 I0507 00:07:49.601780 6 log.go:172] (0xc001261d60) (1) Data frame handling I0507 00:07:49.601797 6 log.go:172] (0xc001261d60) (1) Data frame sent I0507 00:07:49.601813 6 log.go:172] (0xc000ea5130) (0xc001261d60) Stream removed, broadcasting: 1 I0507 00:07:49.601883 6 log.go:172] (0xc000ea5130) (0xc001261d60) Stream removed, broadcasting: 1 I0507 00:07:49.601891 6 log.go:172] (0xc000ea5130) (0xc001da34a0) Stream removed, broadcasting: 3 I0507 00:07:49.601926 6 log.go:172] (0xc000ea5130) Go away received I0507 00:07:49.602021 6 log.go:172] (0xc000ea5130) (0xc0011645a0) Stream removed, broadcasting: 5 May 7 00:07:49.602: INFO: Exec stderr: "" May 7 00:07:49.602: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2008 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 00:07:49.602: INFO: >>> kubeConfig: /root/.kube/config I0507 00:07:49.879228 6 log.go:172] (0xc0016a1760) (0xc0011648c0) Create stream I0507 00:07:49.879255 6 log.go:172] (0xc0016a1760) (0xc0011648c0) Stream added, broadcasting: 1 I0507 00:07:49.881714 6 log.go:172] (0xc0016a1760) Reply frame received for 1 I0507 00:07:49.881740 6 log.go:172] (0xc0016a1760) (0xc0023d0280) Create stream I0507 00:07:49.881746 6 log.go:172] (0xc0016a1760) (0xc0023d0280) Stream added, broadcasting: 3 I0507 00:07:49.882456 6 log.go:172] (0xc0016a1760) Reply frame received for 3 I0507 00:07:49.882497 6 log.go:172] (0xc0016a1760) (0xc001da35e0) Create stream I0507 00:07:49.882507 6 log.go:172] (0xc0016a1760) (0xc001da35e0) Stream added, broadcasting: 5 I0507 
00:07:49.883276 6 log.go:172] (0xc0016a1760) Reply frame received for 5 I0507 00:07:49.938050 6 log.go:172] (0xc0016a1760) Data frame received for 5 I0507 00:07:49.938078 6 log.go:172] (0xc001da35e0) (5) Data frame handling I0507 00:07:49.938108 6 log.go:172] (0xc0016a1760) Data frame received for 3 I0507 00:07:49.938140 6 log.go:172] (0xc0023d0280) (3) Data frame handling I0507 00:07:49.938166 6 log.go:172] (0xc0023d0280) (3) Data frame sent I0507 00:07:49.938179 6 log.go:172] (0xc0016a1760) Data frame received for 3 I0507 00:07:49.938191 6 log.go:172] (0xc0023d0280) (3) Data frame handling I0507 00:07:49.939900 6 log.go:172] (0xc0016a1760) Data frame received for 1 I0507 00:07:49.939947 6 log.go:172] (0xc0011648c0) (1) Data frame handling I0507 00:07:49.939971 6 log.go:172] (0xc0011648c0) (1) Data frame sent I0507 00:07:49.939999 6 log.go:172] (0xc0016a1760) (0xc0011648c0) Stream removed, broadcasting: 1 I0507 00:07:49.940070 6 log.go:172] (0xc0016a1760) Go away received I0507 00:07:49.940132 6 log.go:172] (0xc0016a1760) (0xc0011648c0) Stream removed, broadcasting: 1 I0507 00:07:49.940171 6 log.go:172] (0xc0016a1760) (0xc0023d0280) Stream removed, broadcasting: 3 I0507 00:07:49.940205 6 log.go:172] (0xc0016a1760) (0xc001da35e0) Stream removed, broadcasting: 5 May 7 00:07:49.940: INFO: Exec stderr: "" May 7 00:07:49.940: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2008 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 00:07:49.940: INFO: >>> kubeConfig: /root/.kube/config I0507 00:07:50.075070 6 log.go:172] (0xc0037e0370) (0xc0023d0960) Create stream I0507 00:07:50.075110 6 log.go:172] (0xc0037e0370) (0xc0023d0960) Stream added, broadcasting: 1 I0507 00:07:50.078109 6 log.go:172] (0xc0037e0370) Reply frame received for 1 I0507 00:07:50.078153 6 log.go:172] (0xc0037e0370) (0xc001488280) Create stream I0507 00:07:50.078164 6 log.go:172] (0xc0037e0370) (0xc001488280) Stream added, broadcasting: 3 I0507 00:07:50.078970 6 log.go:172] (0xc0037e0370) Reply frame received for 3 I0507 00:07:50.079013 6 log.go:172] (0xc0037e0370) (0xc001261ea0) Create stream I0507 00:07:50.079026 6 log.go:172] (0xc0037e0370) (0xc001261ea0) Stream added, broadcasting: 5 I0507 00:07:50.079848 6 log.go:172] (0xc0037e0370) Reply frame received for 5 I0507 00:07:50.136167 6 log.go:172] (0xc0037e0370) Data frame received for 5 I0507 00:07:50.136193 6 log.go:172] (0xc001261ea0) (5) Data frame handling I0507 00:07:50.136212 6 log.go:172] (0xc0037e0370) Data frame received for 3 I0507 00:07:50.136216 6 log.go:172] (0xc001488280) (3) Data frame handling I0507 00:07:50.136223 6 log.go:172] (0xc001488280) (3) Data frame sent I0507 00:07:50.136242 6 log.go:172] (0xc0037e0370) Data frame received for 3 I0507 00:07:50.136249 6 log.go:172] (0xc001488280) (3) Data frame handling I0507 00:07:50.137568 6 log.go:172] (0xc0037e0370) Data frame received for 1 I0507 00:07:50.137592 6 log.go:172] (0xc0023d0960) (1) Data frame handling I0507 00:07:50.137611 6 log.go:172] (0xc0023d0960) (1) Data frame sent I0507 00:07:50.137821 6 log.go:172] (0xc0037e0370) (0xc0023d0960) Stream removed, broadcasting: 1 I0507 00:07:50.137837 6 log.go:172] (0xc0037e0370) Go away received I0507 00:07:50.137912 6 log.go:172] (0xc0037e0370) (0xc0023d0960) Stream removed, broadcasting: 1 I0507 00:07:50.137932 6 log.go:172] (0xc0037e0370) (0xc001488280) Stream removed, broadcasting: 3 I0507 00:07:50.137947 6 log.go:172] (0xc0037e0370) 
(0xc001261ea0) Stream removed, broadcasting: 5 May 7 00:07:50.137: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 7 00:07:50.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-2008" for this suite. • [SLOW TEST:16.699 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":239,"skipped":3880,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 7 00:07:50.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs May 7 00:07:50.695: INFO: Waiting up to 5m0s for pod "pod-66ae42f0-91c6-42d3-9856-330716126b04" in namespace "emptydir-7441" to be "success or failure" May 7 00:07:50.736: INFO: Pod "pod-66ae42f0-91c6-42d3-9856-330716126b04": Phase="Pending", Reason="", readiness=false. Elapsed: 40.523398ms May 7 00:07:52.740: INFO: Pod "pod-66ae42f0-91c6-42d3-9856-330716126b04": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044825612s May 7 00:07:54.919: INFO: Pod "pod-66ae42f0-91c6-42d3-9856-330716126b04": Phase="Pending", Reason="", readiness=false. Elapsed: 4.223431065s May 7 00:07:57.662: INFO: Pod "pod-66ae42f0-91c6-42d3-9856-330716126b04": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.967104051s STEP: Saw pod success May 7 00:07:57.662: INFO: Pod "pod-66ae42f0-91c6-42d3-9856-330716126b04" satisfied condition "success or failure" May 7 00:07:57.667: INFO: Trying to get logs from node jerma-worker pod pod-66ae42f0-91c6-42d3-9856-330716126b04 container test-container: STEP: delete the pod May 7 00:07:58.458: INFO: Waiting for pod pod-66ae42f0-91c6-42d3-9856-330716126b04 to disappear May 7 00:07:58.462: INFO: Pod pod-66ae42f0-91c6-42d3-9856-330716126b04 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 7 00:07:58.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7441" for this suite. 
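The emptyDir case above combines three knobs: a tmpfs-backed medium, a non-root user, and 0666 file permissions. A sketch of that pod shape, with an assumed UID of 1001 and an illustrative write-then-list command:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	nonRoot := int64(1001) // illustrative non-root UID
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRoot},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "echo hi > /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the emptyDir with tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}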
• [SLOW TEST:8.463 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":240,"skipped":3884,"failed":0} SSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 7 00:07:58.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-3548 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-3548 I0507 00:08:00.605560 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-3548, replica count: 2 I0507 00:08:03.655987 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0507 00:08:06.656244 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 7 00:08:06.656: INFO: Creating new exec pod May 7 00:08:13.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3548 execpodzdtwp -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 7 00:08:22.259: INFO: stderr: "I0507 00:08:22.168842 4398 log.go:172] (0xc000470160) (0xc00055e780) Create stream\nI0507 00:08:22.168876 4398 log.go:172] (0xc000470160) (0xc00055e780) Stream added, broadcasting: 1\nI0507 00:08:22.170827 4398 log.go:172] (0xc000470160) Reply frame received for 1\nI0507 00:08:22.170853 4398 log.go:172] (0xc000470160) (0xc000757540) Create stream\nI0507 00:08:22.170860 4398 log.go:172] (0xc000470160) (0xc000757540) Stream added, broadcasting: 3\nI0507 00:08:22.171507 4398 log.go:172] (0xc000470160) Reply frame received for 3\nI0507 00:08:22.171544 4398 log.go:172] (0xc000470160) (0xc0007575e0) Create stream\nI0507 00:08:22.171555 4398 log.go:172] (0xc000470160) (0xc0007575e0) Stream added, broadcasting: 5\nI0507 00:08:22.172165 4398 log.go:172] (0xc000470160) Reply frame received for 5\nI0507 00:08:22.251908 4398 log.go:172] (0xc000470160) Data frame received for 5\nI0507 00:08:22.251948 4398 log.go:172] (0xc0007575e0) (5) Data frame handling\nI0507 00:08:22.251976 4398 log.go:172] (0xc0007575e0) (5) Data frame sent\nI0507 00:08:22.251989 4398 log.go:172] 
(0xc000470160) Data frame received for 5\nI0507 00:08:22.252007 4398 log.go:172] (0xc0007575e0) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0507 00:08:22.252035 4398 log.go:172] (0xc0007575e0) (5) Data frame sent\nI0507 00:08:22.252354 4398 log.go:172] (0xc000470160) Data frame received for 5\nI0507 00:08:22.252378 4398 log.go:172] (0xc0007575e0) (5) Data frame handling\nI0507 00:08:22.252408 4398 log.go:172] (0xc000470160) Data frame received for 3\nI0507 00:08:22.252424 4398 log.go:172] (0xc000757540) (3) Data frame handling\nI0507 00:08:22.254459 4398 log.go:172] (0xc000470160) Data frame received for 1\nI0507 00:08:22.254483 4398 log.go:172] (0xc00055e780) (1) Data frame handling\nI0507 00:08:22.254503 4398 log.go:172] (0xc00055e780) (1) Data frame sent\nI0507 00:08:22.254537 4398 log.go:172] (0xc000470160) (0xc00055e780) Stream removed, broadcasting: 1\nI0507 00:08:22.254556 4398 log.go:172] (0xc000470160) Go away received\nI0507 00:08:22.254948 4398 log.go:172] (0xc000470160) (0xc00055e780) Stream removed, broadcasting: 1\nI0507 00:08:22.254964 4398 log.go:172] (0xc000470160) (0xc000757540) Stream removed, broadcasting: 3\nI0507 00:08:22.254970 4398 log.go:172] (0xc000470160) (0xc0007575e0) Stream removed, broadcasting: 5\n" May 7 00:08:22.259: INFO: stdout: "" May 7 00:08:22.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3548 execpodzdtwp -- /bin/sh -x -c nc -zv -t -w 2 10.105.65.147 80' May 7 00:08:22.455: INFO: stderr: "I0507 00:08:22.381579 4430 log.go:172] (0xc00094a790) (0xc0009fa3c0) Create stream\nI0507 00:08:22.381634 4430 log.go:172] (0xc00094a790) (0xc0009fa3c0) Stream added, broadcasting: 1\nI0507 00:08:22.385299 4430 log.go:172] (0xc00094a790) Reply frame received for 1\nI0507 00:08:22.385339 4430 log.go:172] (0xc00094a790) (0xc000593d60) Create stream\nI0507 00:08:22.385354 4430 log.go:172] (0xc00094a790) (0xc000593d60) Stream added, broadcasting: 3\nI0507 00:08:22.386151 4430 log.go:172] (0xc00094a790) Reply frame received for 3\nI0507 00:08:22.386185 4430 log.go:172] (0xc00094a790) (0xc0000dc960) Create stream\nI0507 00:08:22.386194 4430 log.go:172] (0xc00094a790) (0xc0000dc960) Stream added, broadcasting: 5\nI0507 00:08:22.386916 4430 log.go:172] (0xc00094a790) Reply frame received for 5\nI0507 00:08:22.449392 4430 log.go:172] (0xc00094a790) Data frame received for 3\nI0507 00:08:22.449440 4430 log.go:172] (0xc000593d60) (3) Data frame handling\nI0507 00:08:22.449475 4430 log.go:172] (0xc00094a790) Data frame received for 5\nI0507 00:08:22.449488 4430 log.go:172] (0xc0000dc960) (5) Data frame handling\nI0507 00:08:22.449499 4430 log.go:172] (0xc0000dc960) (5) Data frame sent\nI0507 00:08:22.449516 4430 log.go:172] (0xc00094a790) Data frame received for 5\nI0507 00:08:22.449557 4430 log.go:172] (0xc0000dc960) (5) Data frame handling\n+ nc -zv -t -w 2 10.105.65.147 80\nConnection to 10.105.65.147 80 port [tcp/http] succeeded!\nI0507 00:08:22.450855 4430 log.go:172] (0xc00094a790) Data frame received for 1\nI0507 00:08:22.450885 4430 log.go:172] (0xc0009fa3c0) (1) Data frame handling\nI0507 00:08:22.450899 4430 log.go:172] (0xc0009fa3c0) (1) Data frame sent\nI0507 00:08:22.450912 4430 log.go:172] (0xc00094a790) (0xc0009fa3c0) Stream removed, broadcasting: 1\nI0507 00:08:22.450925 4430 log.go:172] (0xc00094a790) Go away received\nI0507 00:08:22.451306 4430 log.go:172] (0xc00094a790) (0xc0009fa3c0) Stream removed, broadcasting: 
1\nI0507 00:08:22.451317 4430 log.go:172] (0xc00094a790) (0xc000593d60) Stream removed, broadcasting: 3\nI0507 00:08:22.451324 4430 log.go:172] (0xc00094a790) (0xc0000dc960) Stream removed, broadcasting: 5\n" May 7 00:08:22.456: INFO: stdout: "" May 7 00:08:22.456: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 7 00:08:22.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3548" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:23.786 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":241,"skipped":3887,"failed":0} SSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 7 00:08:22.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 7 00:08:50.647: INFO: Container started at 2020-05-07 00:08:25 +0000 UTC, pod became ready at 2020-05-07 00:08:49 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 7 00:08:50.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3313" for this suite. 
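The readiness gap recorded above (container started at :25, pod Ready at :49) is what the probe's initial delay buys. A hedged sketch of a container spec in that spirit, written against the v1.17-era API where Probe embeds Handler (later renamed ProbeHandler); the delay, period, image, and file path are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	c := corev1.Container{
		Name:    "readiness",
		Image:   "busybox",
		Command: []string{"sh", "-c", "echo ok > /tmp/health && sleep 600"},
		ReadinessProbe: &corev1.Probe{
			InitialDelaySeconds: 20, // kubelet waits this long before the first probe
			PeriodSeconds:       5,
			Handler: corev1.Handler{ // embedded field; renamed ProbeHandler in later API versions
				Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
			},
		},
		// No liveness probe: readiness only gates the pod's Ready condition,
		// so the container is never restarted, matching the test's assertion.
	}
	fmt.Println(c.Name)
}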
• [SLOW TEST:28.161 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":242,"skipped":3890,"failed":0} SSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 7 00:08:50.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 7 00:08:50.795: INFO: Waiting up to 5m0s for pod "downward-api-3cf0df34-2d6c-4034-92ae-e4c491a836df" in namespace "downward-api-2083" to be "success or failure" May 7 00:08:50.802: INFO: Pod "downward-api-3cf0df34-2d6c-4034-92ae-e4c491a836df": Phase="Pending", Reason="", readiness=false. Elapsed: 6.614196ms May 7 00:08:52.860: INFO: Pod "downward-api-3cf0df34-2d6c-4034-92ae-e4c491a836df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064668965s May 7 00:08:54.864: INFO: Pod "downward-api-3cf0df34-2d6c-4034-92ae-e4c491a836df": Phase="Running", Reason="", readiness=true. Elapsed: 4.068364298s May 7 00:08:57.118: INFO: Pod "downward-api-3cf0df34-2d6c-4034-92ae-e4c491a836df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.322662665s STEP: Saw pod success May 7 00:08:57.118: INFO: Pod "downward-api-3cf0df34-2d6c-4034-92ae-e4c491a836df" satisfied condition "success or failure" May 7 00:08:57.121: INFO: Trying to get logs from node jerma-worker pod downward-api-3cf0df34-2d6c-4034-92ae-e4c491a836df container dapi-container: STEP: delete the pod May 7 00:08:57.810: INFO: Waiting for pod downward-api-3cf0df34-2d6c-4034-92ae-e4c491a836df to disappear May 7 00:08:58.106: INFO: Pod downward-api-3cf0df34-2d6c-4034-92ae-e4c491a836df no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 7 00:08:58.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2083" for this suite. 
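The assertion above is that a container's resource limits and requests can be injected as environment variables through the downward API. A minimal sketch of that wiring; the env var names are arbitrary:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	env := []corev1.EnvVar{
		{
			Name: "CPU_LIMIT",
			ValueFrom: &corev1.EnvVarSource{
				// Resolved by the kubelet from the container's own spec.
				ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
			},
		},
		{
			Name: "MEMORY_REQUEST",
			ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "requests.memory"},
			},
		},
	}
	fmt.Println(len(env), "downward API env vars")
}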
• [SLOW TEST:7.867 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":243,"skipped":3897,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 7 00:08:58.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 7 00:09:06.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8913" for this suite. • [SLOW TEST:7.998 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":278,"completed":244,"skipped":3904,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 7 00:09:06.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation May 7 00:09:06.690: INFO: >>> kubeConfig: /root/.kube/config May 7 00:09:09.196: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 7 00:09:21.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7650" for this suite. • [SLOW TEST:14.723 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":245,"skipped":3914,"failed":0} SSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 7 00:09:21.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 7 00:09:26.206: INFO: Successfully updated pod "pod-update-af48a276-f854-427e-a493-d3aea03c7f03" STEP: verifying the updated pod is in kubernetes May 7 00:09:26.261: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 7 00:09:26.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3039" for this suite. 
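The pod-update test above reduces to a read-modify-write through the API server. A sketch using the context-less client-go signatures contemporary with the v1.17 binaries in this log (newer client-go releases put a context.Context and options first); the label key and value are illustrative:

package example

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// relabelPod fetches a pod, changes a label, and writes it back.
// On success the apiserver bumps metadata.resourceVersion, which is
// what "verifying the updated pod is in kubernetes" can check.
func relabelPod(cs kubernetes.Interface, namespace, name string) error {
	pod, err := cs.CoreV1().Pods(namespace).Get(name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if pod.Labels == nil {
		pod.Labels = map[string]string{}
	}
	pod.Labels["time"] = "updated"
	_, err = cs.CoreV1().Pods(namespace).Update(pod)
	return err
}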
• [SLOW TEST:5.026 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":246,"skipped":3921,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 7 00:09:26.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on tmpfs May 7 00:09:27.139: INFO: Waiting up to 5m0s for pod "pod-5a7a69d8-9e7a-4123-89df-89fba5251e16" in namespace "emptydir-2248" to be "success or failure" May 7 00:09:27.399: INFO: Pod "pod-5a7a69d8-9e7a-4123-89df-89fba5251e16": Phase="Pending", Reason="", readiness=false. Elapsed: 260.356416ms May 7 00:09:29.403: INFO: Pod "pod-5a7a69d8-9e7a-4123-89df-89fba5251e16": Phase="Pending", Reason="", readiness=false. Elapsed: 2.263567752s May 7 00:09:31.407: INFO: Pod "pod-5a7a69d8-9e7a-4123-89df-89fba5251e16": Phase="Running", Reason="", readiness=true. Elapsed: 4.267628541s May 7 00:09:33.411: INFO: Pod "pod-5a7a69d8-9e7a-4123-89df-89fba5251e16": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.271551062s STEP: Saw pod success May 7 00:09:33.411: INFO: Pod "pod-5a7a69d8-9e7a-4123-89df-89fba5251e16" satisfied condition "success or failure" May 7 00:09:33.413: INFO: Trying to get logs from node jerma-worker2 pod pod-5a7a69d8-9e7a-4123-89df-89fba5251e16 container test-container: STEP: delete the pod May 7 00:09:33.439: INFO: Waiting for pod pod-5a7a69d8-9e7a-4123-89df-89fba5251e16 to disappear May 7 00:09:33.444: INFO: Pod pod-5a7a69d8-9e7a-4123-89df-89fba5251e16 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 7 00:09:33.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2248" for this suite. 
• [SLOW TEST:7.181 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":247,"skipped":3940,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 7 00:09:33.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation May 7 00:09:33.608: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation May 7 00:09:44.064: INFO: >>> kubeConfig: /root/.kube/config May 7 00:09:46.959: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 7 00:09:57.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6022" for this suite. 
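These multi-version cases hinge on the CRD's versions list: every served version is expected to appear in the published OpenAPI spec. A hedged sketch of such a CRD using the apiextensions v1beta1 types current for this cluster version; the group and names are invented:

package example

import (
	apiextv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// multiVersionCRD declares one CRD served at two versions; the OpenAPI
// publisher should expose definitions for both while only one version
// is the storage version.
func multiVersionCRD() *apiextv1beta1.CustomResourceDefinition {
	return &apiextv1beta1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"},
		Spec: apiextv1beta1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextv1beta1.NamespaceScoped,
			Names: apiextv1beta1.CustomResourceDefinitionNames{
				Plural:   "foos",
				Singular: "foo",
				Kind:     "Foo",
				ListKind: "FooList",
			},
			Versions: []apiextv1beta1.CustomResourceDefinitionVersion{
				{Name: "v1", Served: true, Storage: true},  // storage version
				{Name: "v2", Served: true, Storage: false}, // served only
			},
		},
	}
}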
• [SLOW TEST:24.127 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":248,"skipped":3996,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 7 00:09:57.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 7 00:09:59.135: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 7 00:09:59.174: INFO: Waiting for terminating namespaces to be deleted... May 7 00:09:59.176: INFO: Logging pods the kubelet thinks are on node jerma-worker before test May 7 00:09:59.511: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 7 00:09:59.511: INFO: Container kindnet-cni ready: true, restart count 0 May 7 00:09:59.511: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 7 00:09:59.511: INFO: Container kube-proxy ready: true, restart count 0 May 7 00:09:59.511: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test May 7 00:09:59.603: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 7 00:09:59.603: INFO: Container kindnet-cni ready: true, restart count 0 May 7 00:09:59.603: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) May 7 00:09:59.603: INFO: Container kube-bench ready: false, restart count 0 May 7 00:09:59.603: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 7 00:09:59.603: INFO: Container kube-proxy ready: true, restart count 0 May 7 00:09:59.603: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) May 7 00:09:59.603: INFO: Container kube-hunter ready: false, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: verifying the node has the label node jerma-worker STEP: verifying the node has the label node jerma-worker2 May 7 00:10:00.253: INFO: Pod kindnet-c5svj requesting resource cpu=100m on Node jerma-worker May 7 00:10:00.253: INFO: Pod kindnet-zk6sq requesting resource cpu=100m on Node jerma-worker2 May 7 00:10:00.253:
INFO: Pod kube-proxy-44mlz requesting resource cpu=0m on Node jerma-worker May 7 00:10:00.253: INFO: Pod kube-proxy-75q42 requesting resource cpu=0m on Node jerma-worker2 STEP: Starting Pods to consume most of the cluster CPU. May 7 00:10:00.253: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker May 7 00:10:00.292: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-2f61151e-4a3a-40b8-8607-9333d9a74c53.160c9675b4d165e0], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5927/filler-pod-2f61151e-4a3a-40b8-8607-9333d9a74c53 to jerma-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-2f61151e-4a3a-40b8-8607-9333d9a74c53.160c967669fcb6c8], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-2f61151e-4a3a-40b8-8607-9333d9a74c53.160c96772901d911], Reason = [Created], Message = [Created container filler-pod-2f61151e-4a3a-40b8-8607-9333d9a74c53] STEP: Considering event: Type = [Normal], Name = [filler-pod-2f61151e-4a3a-40b8-8607-9333d9a74c53.160c967746aeb7ed], Reason = [Started], Message = [Started container filler-pod-2f61151e-4a3a-40b8-8607-9333d9a74c53] STEP: Considering event: Type = [Normal], Name = [filler-pod-cf5e62f3-1d8e-43da-9212-834ce15eddf4.160c9675a854627c], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5927/filler-pod-cf5e62f3-1d8e-43da-9212-834ce15eddf4 to jerma-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-cf5e62f3-1d8e-43da-9212-834ce15eddf4.160c967667b4242c], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-cf5e62f3-1d8e-43da-9212-834ce15eddf4.160c9677386b24c3], Reason = [Created], Message = [Created container filler-pod-cf5e62f3-1d8e-43da-9212-834ce15eddf4] STEP: Considering event: Type = [Normal], Name = [filler-pod-cf5e62f3-1d8e-43da-9212-834ce15eddf4.160c96774eb5bcf2], Reason = [Started], Message = [Started container filler-pod-cf5e62f3-1d8e-43da-9212-834ce15eddf4] STEP: Considering event: Type = [Warning], Name = [additional-pod.160c9677926c253e], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node jerma-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 7 00:10:10.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5927" for this suite. 
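The predicate validated above is arithmetic over CPU requests: the suite sums each node's existing requests, fills the remainder (the cpu=11130m filler pods), and then shows that one more pod cannot fit anywhere, yielding the FailedScheduling event. A sketch of the request-side computation; this helper is illustrative, not the suite's own:

package example

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// fillerRequest computes how much CPU a filler pod must request so a
// node is left with no schedulable CPU: allocatable minus the sum of
// the requests already placed on it.
func fillerRequest(allocatable resource.Quantity, requested []resource.Quantity) corev1.ResourceList {
	remaining := allocatable.DeepCopy()
	for _, r := range requested {
		remaining.Sub(r)
	}
	return corev1.ResourceList{corev1.ResourceCPU: remaining}
}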
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:12.543 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":249,"skipped":4034,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 7 00:10:10.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD May 7 00:10:10.282: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 7 00:10:26.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4518" for this suite. 
• [SLOW TEST:16.757 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":250,"skipped":4079,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 7 00:10:26.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium May 7 00:10:27.191: INFO: Waiting up to 5m0s for pod "pod-38a74f04-aa08-4dc9-baf7-5d96e43bc7c6" in namespace "emptydir-7662" to be "success or failure" May 7 00:10:27.205: INFO: Pod "pod-38a74f04-aa08-4dc9-baf7-5d96e43bc7c6": Phase="Pending", Reason="", readiness=false. Elapsed: 13.838903ms May 7 00:10:29.360: INFO: Pod "pod-38a74f04-aa08-4dc9-baf7-5d96e43bc7c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.16831979s May 7 00:10:31.364: INFO: Pod "pod-38a74f04-aa08-4dc9-baf7-5d96e43bc7c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.173071929s May 7 00:10:33.372: INFO: Pod "pod-38a74f04-aa08-4dc9-baf7-5d96e43bc7c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.18046874s STEP: Saw pod success May 7 00:10:33.372: INFO: Pod "pod-38a74f04-aa08-4dc9-baf7-5d96e43bc7c6" satisfied condition "success or failure" May 7 00:10:33.374: INFO: Trying to get logs from node jerma-worker pod pod-38a74f04-aa08-4dc9-baf7-5d96e43bc7c6 container test-container: STEP: delete the pod May 7 00:10:33.387: INFO: Waiting for pod pod-38a74f04-aa08-4dc9-baf7-5d96e43bc7c6 to disappear May 7 00:10:33.392: INFO: Pod pod-38a74f04-aa08-4dc9-baf7-5d96e43bc7c6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 7 00:10:33.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7662" for this suite. 
• [SLOW TEST:6.518 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":251,"skipped":4087,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 7 00:10:33.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5121.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5121.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 7 00:10:51.926: INFO: DNS probes using dns-5121/dns-test-a2e2afe9-a1fe-4bc8-92c2-aa63eb8f95de succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 7 00:10:51.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5121" for this suite. 
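Behind the dig loops above is one invariant: from any pod, cluster DNS must resolve the API server's well-known service name, over both UDP and TCP. The same check expressed in Go, runnable inside a pod (assuming the default cluster.local domain):

package main

import (
	"fmt"
	"net"
)

func main() {
	// cluster.local is the default cluster domain; clusters configured
	// with a different domain would use that suffix instead.
	addrs, err := net.LookupHost("kubernetes.default.svc.cluster.local")
	if err != nil {
		fmt.Println("cluster DNS lookup failed:", err)
		return
	}
	fmt.Println("kubernetes service resolves to:", addrs)
}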
• [SLOW TEST:18.659 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":252,"skipped":4102,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 7 00:10:52.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 7 00:10:54.082: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 7 00:10:56.090: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724407054, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724407054, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724407054, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724407054, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 7 00:10:58.414: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724407054, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724407054, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724407054, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724407054, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the 
webhook service STEP: Verifying the service has paired with the endpoint May 7 00:11:01.155: INFO: Waiting for the number of endpoints of service e2e-test-webhook to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created mutating webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of mutating webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 7 00:11:01.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7713" for this suite. STEP: Destroying namespace "webhook-7713-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.111 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":253,"skipped":4124,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 7 00:11:02.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-2a66690a-5d23-47cb-b3fc-f7d0e97b595a STEP: Creating a pod to test consume configMaps May 7 00:11:02.250: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ad395ffa-42d4-408d-9cd3-119730baea4e" in namespace "projected-9806" to be "success or failure" May 7 00:11:02.316: INFO: Pod "pod-projected-configmaps-ad395ffa-42d4-408d-9cd3-119730baea4e": Phase="Pending", Reason="", readiness=false. Elapsed: 66.553592ms May 7 00:11:04.321: INFO: Pod "pod-projected-configmaps-ad395ffa-42d4-408d-9cd3-119730baea4e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071165957s May 7 00:11:06.329: INFO: Pod "pod-projected-configmaps-ad395ffa-42d4-408d-9cd3-119730baea4e": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.078680402s STEP: Saw pod success May 7 00:11:06.329: INFO: Pod "pod-projected-configmaps-ad395ffa-42d4-408d-9cd3-119730baea4e" satisfied condition "success or failure" May 7 00:11:06.331: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-ad395ffa-42d4-408d-9cd3-119730baea4e container projected-configmap-volume-test: STEP: delete the pod May 7 00:11:06.363: INFO: Waiting for pod pod-projected-configmaps-ad395ffa-42d4-408d-9cd3-119730baea4e to disappear May 7 00:11:06.406: INFO: Pod pod-projected-configmaps-ad395ffa-42d4-408d-9cd3-119730baea4e no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 7 00:11:06.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9806" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":254,"skipped":4203,"failed":0} SS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 7 00:11:06.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-6c265502-a009-4109-9936-78c088789411 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 7 00:11:15.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7586" for this suite. 
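The test above exercises the ConfigMap BinaryData field, which carries raw bytes alongside the UTF-8-only Data map; mounted as a volume, each key becomes a file whose contents are reproduced byte-for-byte. A minimal sketch; the names and bytes are arbitrary:

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// binaryConfigMap mixes text and binary payloads in one ConfigMap.
func binaryConfigMap() *corev1.ConfigMap {
	return &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-binary-demo"},
		Data: map[string]string{
			"data-1": "value-1", // plain UTF-8 text
		},
		BinaryData: map[string][]byte{
			"dump.bin": {0x00, 0xde, 0xca, 0xfe}, // bytes that need not be valid UTF-8
		},
	}
}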
• [SLOW TEST:8.836 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":255,"skipped":4205,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 7 00:11:15.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-secret-pq4w STEP: Creating a pod to test atomic-volume-subpath May 7 00:11:15.649: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-pq4w" in namespace "subpath-2010" to be "success or failure" May 7 00:11:15.688: INFO: Pod "pod-subpath-test-secret-pq4w": Phase="Pending", Reason="", readiness=false. Elapsed: 38.419882ms May 7 00:11:18.144: INFO: Pod "pod-subpath-test-secret-pq4w": Phase="Pending", Reason="", readiness=false. Elapsed: 2.49427865s May 7 00:11:20.149: INFO: Pod "pod-subpath-test-secret-pq4w": Phase="Pending", Reason="", readiness=false. Elapsed: 4.499450739s May 7 00:11:22.306: INFO: Pod "pod-subpath-test-secret-pq4w": Phase="Running", Reason="", readiness=true. Elapsed: 6.656127473s May 7 00:11:24.309: INFO: Pod "pod-subpath-test-secret-pq4w": Phase="Running", Reason="", readiness=true. Elapsed: 8.659578663s May 7 00:11:26.461: INFO: Pod "pod-subpath-test-secret-pq4w": Phase="Running", Reason="", readiness=true. Elapsed: 10.811773289s May 7 00:11:28.465: INFO: Pod "pod-subpath-test-secret-pq4w": Phase="Running", Reason="", readiness=true. Elapsed: 12.815592362s May 7 00:11:30.468: INFO: Pod "pod-subpath-test-secret-pq4w": Phase="Running", Reason="", readiness=true. Elapsed: 14.819009154s May 7 00:11:32.472: INFO: Pod "pod-subpath-test-secret-pq4w": Phase="Running", Reason="", readiness=true. Elapsed: 16.823017257s May 7 00:11:34.476: INFO: Pod "pod-subpath-test-secret-pq4w": Phase="Running", Reason="", readiness=true. Elapsed: 18.826912834s May 7 00:11:36.479: INFO: Pod "pod-subpath-test-secret-pq4w": Phase="Running", Reason="", readiness=true. Elapsed: 20.829979296s May 7 00:11:38.483: INFO: Pod "pod-subpath-test-secret-pq4w": Phase="Running", Reason="", readiness=true. Elapsed: 22.833560287s May 7 00:11:40.486: INFO: Pod "pod-subpath-test-secret-pq4w": Phase="Running", Reason="", readiness=true. Elapsed: 24.836744892s May 7 00:11:42.491: INFO: Pod "pod-subpath-test-secret-pq4w": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.841231472s STEP: Saw pod success May 7 00:11:42.491: INFO: Pod "pod-subpath-test-secret-pq4w" satisfied condition "success or failure" May 7 00:11:42.494: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-secret-pq4w container test-container-subpath-secret-pq4w: STEP: delete the pod May 7 00:11:42.513: INFO: Waiting for pod pod-subpath-test-secret-pq4w to disappear May 7 00:11:42.523: INFO: Pod pod-subpath-test-secret-pq4w no longer exists STEP: Deleting pod pod-subpath-test-secret-pq4w May 7 00:11:42.523: INFO: Deleting pod "pod-subpath-test-secret-pq4w" in namespace "subpath-2010" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 7 00:11:42.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2010" for this suite. • [SLOW TEST:27.281 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":256,"skipped":4233,"failed":0} SSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 7 00:11:42.534: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-9977.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-9977.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9977.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-9977.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-9977.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9977.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 7 00:11:53.201: INFO: DNS probes using dns-9977/dns-test-3f2ee4a3-4cbe-4206-b9f4-a6c31d26ae1d succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 7 00:11:53.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9977" for this suite. • [SLOW TEST:11.306 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":257,"skipped":4239,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 7 00:11:53.840: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0507 00:12:07.211841 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 7 00:12:07.211: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 7 00:12:07.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3127" for this suite. • [SLOW TEST:13.444 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":258,"skipped":4262,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 7 00:12:07.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token May 7 00:12:09.220: INFO: created pod pod-service-account-defaultsa May 7 00:12:09.220: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 7 00:12:09.330: INFO: created pod pod-service-account-mountsa May 7 00:12:09.330: INFO: pod pod-service-account-mountsa service account token volume mount: true May 7 00:12:09.348: INFO: created pod pod-service-account-nomountsa May 7 00:12:09.348: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 7 00:12:09.396: INFO: created pod pod-service-account-defaultsa-mountspec May 7 00:12:09.396: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 7 00:12:09.402: INFO: created pod pod-service-account-mountsa-mountspec May 7 00:12:09.402: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 7 
00:12:09.480: INFO: created pod pod-service-account-nomountsa-mountspec May 7 00:12:09.480: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 7 00:12:09.485: INFO: created pod pod-service-account-defaultsa-nomountspec May 7 00:12:09.485: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 7 00:12:09.528: INFO: created pod pod-service-account-mountsa-nomountspec May 7 00:12:09.528: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 7 00:12:09.543: INFO: created pod pod-service-account-nomountsa-nomountspec May 7 00:12:09.543: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 7 00:12:09.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-3730" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":259,"skipped":4281,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 7 00:12:09.992: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted May 7 00:12:19.746: INFO: 10 pods remaining May 7 00:12:19.746: INFO: 10 pods have nil DeletionTimestamp May 7 00:12:19.746: INFO: May 7 00:12:20.680: INFO: 9 pods remaining May 7 00:12:20.680: INFO: 0 pods have nil DeletionTimestamp May 7 00:12:20.680: INFO: May 7 00:12:21.409: INFO: 0 pods remaining May 7 00:12:21.409: INFO: 0 pods have nil DeletionTimestamp May 7 00:12:21.409: INFO: STEP: Gathering metrics W0507 00:12:23.272151 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 7 00:12:23.272: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 7 00:12:23.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1349" for this suite. • [SLOW TEST:14.545 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":260,"skipped":4318,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 7 00:12:24.537: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD May 7 00:12:26.380: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 7 00:12:43.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3527" for this suite.
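
For reference, the behavior exercised here can be reproduced with a minimal two-version CRD and the raw OpenAPI endpoint. This is a sketch assuming cluster-admin access; the group/kind foos.example.com/Foo is illustrative, not the test's generated CRD:

cat <<'EOF' | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
  - name: v2
    served: false    # flipping this from true to false is the step logged above
    storage: false
    schema:
      openAPIV3Schema:
        type: object
EOF
# the unserved version's definition (com.example.v2.Foo) should disappear
# from the published spec, while com.example.v1.Foo stays:
kubectl get --raw /openapi/v2 | grep -o 'com.example.v[12].Foo' | sort -u
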
• [SLOW TEST:19.192 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":261,"skipped":4338,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 7 00:12:43.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 7 00:12:44.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3961" for this suite. 
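
The scenario above, a pod whose container always fails, can be reproduced by hand to confirm that deletion works even while the container crash-loops. A sketch; the pod name bad-busybox is illustrative:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: bad-busybox
spec:
  restartPolicy: OnFailure
  containers:
  - name: bad-busybox
    image: busybox
    command: ["/bin/false"]      # always exits non-zero
EOF
kubectl get pod bad-busybox      # CrashLoopBackOff is expected here
kubectl delete pod bad-busybox   # the point of the test: deletion still succeeds
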
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":262,"skipped":4375,"failed":0} SS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 7 00:12:44.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-8971 STEP: creating a selector STEP: Creating the service pods in kubernetes May 7 00:12:44.850: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 7 00:13:15.190: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.226 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8971 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 00:13:15.190: INFO: >>> kubeConfig: /root/.kube/config I0507 00:13:15.227413 6 log.go:172] (0xc00542d1e0) (0xc0016e5680) Create stream I0507 00:13:15.227448 6 log.go:172] (0xc00542d1e0) (0xc0016e5680) Stream added, broadcasting: 1 I0507 00:13:15.228968 6 log.go:172] (0xc00542d1e0) Reply frame received for 1 I0507 00:13:15.228996 6 log.go:172] (0xc00542d1e0) (0xc001a03680) Create stream I0507 00:13:15.229005 6 log.go:172] (0xc00542d1e0) (0xc001a03680) Stream added, broadcasting: 3 I0507 00:13:15.229810 6 log.go:172] (0xc00542d1e0) Reply frame received for 3 I0507 00:13:15.229837 6 log.go:172] (0xc00542d1e0) (0xc001a03ae0) Create stream I0507 00:13:15.229846 6 log.go:172] (0xc00542d1e0) (0xc001a03ae0) Stream added, broadcasting: 5 I0507 00:13:15.230465 6 log.go:172] (0xc00542d1e0) Reply frame received for 5 I0507 00:13:16.287756 6 log.go:172] (0xc00542d1e0) Data frame received for 5 I0507 00:13:16.287810 6 log.go:172] (0xc001a03ae0) (5) Data frame handling I0507 00:13:16.287844 6 log.go:172] (0xc00542d1e0) Data frame received for 3 I0507 00:13:16.287863 6 log.go:172] (0xc001a03680) (3) Data frame handling I0507 00:13:16.287878 6 log.go:172] (0xc001a03680) (3) Data frame sent I0507 00:13:16.287901 6 log.go:172] (0xc00542d1e0) Data frame received for 3 I0507 00:13:16.287920 6 log.go:172] (0xc001a03680) (3) Data frame handling I0507 00:13:16.290007 6 log.go:172] (0xc00542d1e0) Data frame received for 1 I0507 00:13:16.290033 6 log.go:172] (0xc0016e5680) (1) Data frame handling I0507 00:13:16.290056 6 log.go:172] (0xc0016e5680) (1) Data frame sent I0507 00:13:16.290079 6 log.go:172] (0xc00542d1e0) (0xc0016e5680) Stream removed, broadcasting: 1 I0507 00:13:16.290102 6 log.go:172] (0xc00542d1e0) Go away received I0507 00:13:16.290193 6 log.go:172] (0xc00542d1e0) (0xc0016e5680) Stream removed, broadcasting: 1 I0507 00:13:16.290210 6 log.go:172] (0xc00542d1e0) (0xc001a03680) Stream removed, broadcasting: 3 I0507 00:13:16.290216 6 
log.go:172] (0xc00542d1e0) (0xc001a03ae0) Stream removed, broadcasting: 5 May 7 00:13:16.290: INFO: Found all expected endpoints: [netserver-0] May 7 00:13:16.293: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.130 8081 | grep -v '^\s*$'] Namespace:pod-network-test-8971 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 00:13:16.293: INFO: >>> kubeConfig: /root/.kube/config I0507 00:13:16.438438 6 log.go:172] (0xc002a320b0) (0xc0016e5c20) Create stream I0507 00:13:16.438476 6 log.go:172] (0xc002a320b0) (0xc0016e5c20) Stream added, broadcasting: 1 I0507 00:13:16.440008 6 log.go:172] (0xc002a320b0) Reply frame received for 1 I0507 00:13:16.440042 6 log.go:172] (0xc002a320b0) (0xc0016e5d60) Create stream I0507 00:13:16.440055 6 log.go:172] (0xc002a320b0) (0xc0016e5d60) Stream added, broadcasting: 3 I0507 00:13:16.440860 6 log.go:172] (0xc002a320b0) Reply frame received for 3 I0507 00:13:16.440892 6 log.go:172] (0xc002a320b0) (0xc001a03cc0) Create stream I0507 00:13:16.440905 6 log.go:172] (0xc002a320b0) (0xc001a03cc0) Stream added, broadcasting: 5 I0507 00:13:16.441889 6 log.go:172] (0xc002a320b0) Reply frame received for 5 I0507 00:13:17.510272 6 log.go:172] (0xc002a320b0) Data frame received for 3 I0507 00:13:17.510355 6 log.go:172] (0xc0016e5d60) (3) Data frame handling I0507 00:13:17.510404 6 log.go:172] (0xc0016e5d60) (3) Data frame sent I0507 00:13:17.510443 6 log.go:172] (0xc002a320b0) Data frame received for 3 I0507 00:13:17.510463 6 log.go:172] (0xc0016e5d60) (3) Data frame handling I0507 00:13:17.510517 6 log.go:172] (0xc002a320b0) Data frame received for 5 I0507 00:13:17.510550 6 log.go:172] (0xc001a03cc0) (5) Data frame handling I0507 00:13:17.512276 6 log.go:172] (0xc002a320b0) Data frame received for 1 I0507 00:13:17.512316 6 log.go:172] (0xc0016e5c20) (1) Data frame handling I0507 00:13:17.512372 6 log.go:172] (0xc0016e5c20) (1) Data frame sent I0507 00:13:17.512401 6 log.go:172] (0xc002a320b0) (0xc0016e5c20) Stream removed, broadcasting: 1 I0507 00:13:17.512445 6 log.go:172] (0xc002a320b0) Go away received I0507 00:13:17.512502 6 log.go:172] (0xc002a320b0) (0xc0016e5c20) Stream removed, broadcasting: 1 I0507 00:13:17.512517 6 log.go:172] (0xc002a320b0) (0xc0016e5d60) Stream removed, broadcasting: 3 I0507 00:13:17.512532 6 log.go:172] (0xc002a320b0) (0xc001a03cc0) Stream removed, broadcasting: 5 May 7 00:13:17.512: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 7 00:13:17.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8971" for this suite. 
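
The ExecWithOptions entries above are the whole mechanism of this check: from a host-network test pod, the string hostName is sent over UDP to each netserver pod, and the reply must be that pod's hostname. The probe can be rerun by hand (pod IP and port are the ones from this run; adjust for another cluster):

echo hostName | nc -w 1 -u 10.244.1.226 8081 | grep -v '^\s*$'
# expected output: the target pod's hostname, e.g. netserver-0
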
• [SLOW TEST:33.285 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":263,"skipped":4377,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 7 00:13:17.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication controller my-hostname-basic-28c3d720-0c92-42c6-9091-dfa6d6715174 May 7 00:13:17.805: INFO: Pod name my-hostname-basic-28c3d720-0c92-42c6-9091-dfa6d6715174: Found 0 pods out of 1 May 7 00:13:22.827: INFO: Pod name my-hostname-basic-28c3d720-0c92-42c6-9091-dfa6d6715174: Found 1 pods out of 1 May 7 00:13:22.827: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-28c3d720-0c92-42c6-9091-dfa6d6715174" are running May 7 00:13:22.838: INFO: Pod "my-hostname-basic-28c3d720-0c92-42c6-9091-dfa6d6715174-vskvh" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-07 00:13:17 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-07 00:13:20 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-07 00:13:20 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-07 00:13:17 +0000 UTC Reason: Message:}]) May 7 00:13:22.838: INFO: Trying to dial the pod May 7 00:13:27.849: INFO: Controller my-hostname-basic-28c3d720-0c92-42c6-9091-dfa6d6715174: Got expected result from replica 1 [my-hostname-basic-28c3d720-0c92-42c6-9091-dfa6d6715174-vskvh]: "my-hostname-basic-28c3d720-0c92-42c6-9091-dfa6d6715174-vskvh", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 7 00:13:27.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9665" for this suite. 
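
A minimal equivalent of the controller this test creates, as a sketch: the name, label, and port are illustrative, while the agnhost image and its hostname-serving behavior match this run:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    app: my-hostname-basic
  template:
    metadata:
      labels:
        app: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: ["serve-hostname"]   # replies with the pod's hostname over HTTP
        ports:
        - containerPort: 9376
EOF
# dialing any replica, as the test does, should return that replica's pod name:
kubectl get pods -l app=my-hostname-basic
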
• [SLOW TEST:10.183 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":264,"skipped":4392,"failed":0} SSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 7 00:13:27.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... May 7 00:13:27.912: INFO: Created pod &Pod{ObjectMeta:{dns-1216 dns-1216 /api/v1/namespaces/dns-1216/pods/dns-1216 4e437717-1ac6-494a-9fec-90196917a83f 14043901 0 2020-05-07 00:13:27 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-56vxs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-56vxs,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-56vxs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value
:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... May 7 00:13:31.920: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-1216 PodName:dns-1216 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 7 00:13:31.920: INFO: >>> kubeConfig: /root/.kube/config I0507 00:13:31.960140 6 log.go:172] (0xc002928580) (0xc002875040) Create stream I0507 00:13:31.960164 6 log.go:172] (0xc002928580) (0xc002875040) Stream added, broadcasting: 1 I0507 00:13:31.962849 6 log.go:172] (0xc002928580) Reply frame received for 1 I0507 00:13:31.962929 6 log.go:172] (0xc002928580) (0xc0022900a0) Create stream I0507 00:13:31.962948 6 log.go:172] (0xc002928580) (0xc0022900a0) Stream added, broadcasting: 3 I0507 00:13:31.964099 6 log.go:172] (0xc002928580) Reply frame received for 3 I0507 00:13:31.964154 6 log.go:172] (0xc002928580) (0xc001a03ea0) Create stream I0507 00:13:31.964172 6 log.go:172] (0xc002928580) (0xc001a03ea0) Stream added, broadcasting: 5 I0507 00:13:31.965444 6 log.go:172] (0xc002928580) Reply frame received for 5 I0507 00:13:32.060635 6 log.go:172] (0xc002928580) Data frame received for 3 I0507 00:13:32.060665 6 log.go:172] (0xc0022900a0) (3) Data frame handling I0507 00:13:32.060682 6 log.go:172] (0xc0022900a0) (3) Data frame sent I0507 00:13:32.062006 6 log.go:172] (0xc002928580) Data frame received for 3 I0507 00:13:32.062059 6 log.go:172] (0xc0022900a0) (3) Data frame handling I0507 00:13:32.062096 6 log.go:172] (0xc002928580) Data frame received for 5 I0507 00:13:32.062111 6 log.go:172] (0xc001a03ea0) (5) Data frame handling I0507 00:13:32.063921 6 log.go:172] (0xc002928580) Data frame received for 1 I0507 00:13:32.063952 6 log.go:172] (0xc002875040) (1) Data frame handling I0507 00:13:32.063989 6 log.go:172] (0xc002875040) (1) Data frame sent I0507 00:13:32.064030 6 log.go:172] (0xc002928580) (0xc002875040) Stream removed, broadcasting: 1 I0507 00:13:32.064112 6 log.go:172] (0xc002928580) (0xc002875040) Stream removed, broadcasting: 1 I0507 00:13:32.064136 6 log.go:172] (0xc002928580) (0xc0022900a0) Stream removed, broadcasting: 3 I0507 00:13:32.064143 6 log.go:172] (0xc002928580) (0xc001a03ea0) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
May 7 00:13:32.064: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-1216 PodName:dns-1216 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} I0507 00:13:32.064203 6 log.go:172] (0xc002928580) Go away received May 7 00:13:32.064: INFO: >>> kubeConfig: /root/.kube/config I0507 00:13:32.094475 6 log.go:172] (0xc00302c370) (0xc001a5e5a0) Create stream I0507 00:13:32.094512 6 log.go:172] (0xc00302c370) (0xc001a5e5a0) Stream added, broadcasting: 1 I0507 00:13:32.096339 6 log.go:172] (0xc00302c370) Reply frame received for 1 I0507 00:13:32.096379 6 log.go:172] (0xc00302c370) (0xc002875180) Create stream I0507 00:13:32.096387 6 log.go:172] (0xc00302c370) (0xc002875180) Stream added, broadcasting: 3 I0507 00:13:32.097782 6 log.go:172] (0xc00302c370) Reply frame received for 3 I0507 00:13:32.097845 6 log.go:172] (0xc00302c370) (0xc002875220) Create stream I0507 00:13:32.097871 6 log.go:172] (0xc00302c370) (0xc002875220) Stream added, broadcasting: 5 I0507 00:13:32.099000 6 log.go:172] (0xc00302c370) Reply frame received for 5 I0507 00:13:32.174888 6 log.go:172] (0xc00302c370) Data frame received for 3 I0507 00:13:32.174911 6 log.go:172] (0xc002875180) (3) Data frame handling I0507 00:13:32.174921 6 log.go:172] (0xc002875180) (3) Data frame sent I0507 00:13:32.175688 6 log.go:172] (0xc00302c370) Data frame received for 3 I0507 00:13:32.175708 6 log.go:172] (0xc002875180) (3) Data frame handling I0507 00:13:32.175901 6 log.go:172] (0xc00302c370) Data frame received for 5 I0507 00:13:32.175920 6 log.go:172] (0xc002875220) (5) Data frame handling I0507 00:13:32.178251 6 log.go:172] (0xc00302c370) Data frame received for 1 I0507 00:13:32.178305 6 log.go:172] (0xc001a5e5a0) (1) Data frame handling I0507 00:13:32.178326 6 log.go:172] (0xc001a5e5a0) (1) Data frame sent I0507 00:13:32.178338 6 log.go:172] (0xc00302c370) (0xc001a5e5a0) Stream removed, broadcasting: 1 I0507 00:13:32.178350 6 log.go:172] (0xc00302c370) Go away received I0507 00:13:32.178488 6 log.go:172] (0xc00302c370) (0xc001a5e5a0) Stream removed, broadcasting: 1 I0507 00:13:32.178506 6 log.go:172] (0xc00302c370) (0xc002875180) Stream removed, broadcasting: 3 I0507 00:13:32.178517 6 log.go:172] (0xc00302c370) (0xc002875220) Stream removed, broadcasting: 5 May 7 00:13:32.178: INFO: Deleting pod dns-1216... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 7 00:13:32.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1216" for this suite. 
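
Stripped of defaulted fields, the pod dumped above reduces to the two knobs under test: dnsPolicy None plus a custom dnsConfig. The nameserver and search values below are the ones from this run; the pod name is illustrative:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: dns-custom
spec:
  dnsPolicy: "None"                  # ignore the cluster DNS settings entirely
  dnsConfig:
    nameservers: ["1.1.1.1"]
    searches: ["resolv.conf.local"]
  containers:
  - name: agnhost
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    args: ["pause"]
EOF
# /etc/resolv.conf inside the pod should list only these values:
kubectl exec dns-custom -- cat /etc/resolv.conf
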
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":265,"skipped":4401,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 7 00:13:32.229: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition May 7 00:13:32.584: INFO: Waiting up to 5m0s for pod "var-expansion-575a0b47-cde4-4fbf-b56f-a3536bb6d4a8" in namespace "var-expansion-8249" to be "success or failure" May 7 00:13:32.593: INFO: Pod "var-expansion-575a0b47-cde4-4fbf-b56f-a3536bb6d4a8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.791345ms May 7 00:13:34.597: INFO: Pod "var-expansion-575a0b47-cde4-4fbf-b56f-a3536bb6d4a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012764602s May 7 00:13:36.602: INFO: Pod "var-expansion-575a0b47-cde4-4fbf-b56f-a3536bb6d4a8": Phase="Running", Reason="", readiness=true. Elapsed: 4.018039749s May 7 00:13:38.607: INFO: Pod "var-expansion-575a0b47-cde4-4fbf-b56f-a3536bb6d4a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.022714272s STEP: Saw pod success May 7 00:13:38.607: INFO: Pod "var-expansion-575a0b47-cde4-4fbf-b56f-a3536bb6d4a8" satisfied condition "success or failure" May 7 00:13:38.610: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-575a0b47-cde4-4fbf-b56f-a3536bb6d4a8 container dapi-container: STEP: delete the pod May 7 00:13:38.657: INFO: Waiting for pod var-expansion-575a0b47-cde4-4fbf-b56f-a3536bb6d4a8 to disappear May 7 00:13:38.667: INFO: Pod var-expansion-575a0b47-cde4-4fbf-b56f-a3536bb6d4a8 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 7 00:13:38.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8249" for this suite. 
• [SLOW TEST:6.466 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":266,"skipped":4428,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 7 00:13:38.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 7 00:13:38.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR May 7 00:13:39.373: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-07T00:13:39Z generation:1 name:name1 resourceVersion:14043995 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:8578ac0e-b662-4866-a1cc-07e7f3241e4c] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR May 7 00:13:49.379: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-07T00:13:49Z generation:1 name:name2 resourceVersion:14044033 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:20baf207-51a2-4d82-8010-c9bb077bf6f5] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR May 7 00:13:59.385: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-07T00:13:39Z generation:2 name:name1 resourceVersion:14044063 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:8578ac0e-b662-4866-a1cc-07e7f3241e4c] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR May 7 00:14:09.434: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-07T00:13:49Z generation:2 name:name2 resourceVersion:14044091 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:20baf207-51a2-4d82-8010-c9bb077bf6f5] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR May 7 00:14:19.441: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-07T00:13:39Z generation:2 name:name1 resourceVersion:14044119 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:8578ac0e-b662-4866-a1cc-07e7f3241e4c] 
num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR May 7 00:14:29.450: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-07T00:13:49Z generation:2 name:name2 resourceVersion:14044150 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:20baf207-51a2-4d82-8010-c9bb077bf6f5] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 7 00:14:39.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-2286" for this suite. • [SLOW TEST:61.275 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":267,"skipped":4481,"failed":0} SS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 7 00:14:39.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-4054, will wait for the garbage collector to delete the pods May 7 00:14:46.331: INFO: Deleting Job.batch foo took: 6.532241ms May 7 00:14:47.032: INFO: Terminating Job.batch foo pods took: 700.211887ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 7 00:15:29.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-4054" for this suite. 
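
The deletion flow above, deleting the Job and letting the garbage collector terminate its pods, maps onto the commands below. The Job name foo is the one from this run; --cascade=true is the boolean form this kubectl vintage accepts:

kubectl delete job foo --cascade=true
kubectl get pods -l job-name=foo   # drains to empty as the collector removes the pods
kubectl get job foo                # eventually NotFound
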
• [SLOW TEST:49.570 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":268,"skipped":4483,"failed":0} SSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 7 00:15:29.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 7 00:15:29.839: INFO: Waiting up to 5m0s for pod "downward-api-ab2b15a5-5e95-482d-b000-2004f53e71e1" in namespace "downward-api-9731" to be "success or failure" May 7 00:15:29.858: INFO: Pod "downward-api-ab2b15a5-5e95-482d-b000-2004f53e71e1": Phase="Pending", Reason="", readiness=false. Elapsed: 19.654558ms May 7 00:15:32.218: INFO: Pod "downward-api-ab2b15a5-5e95-482d-b000-2004f53e71e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.379509509s May 7 00:15:34.222: INFO: Pod "downward-api-ab2b15a5-5e95-482d-b000-2004f53e71e1": Phase="Running", Reason="", readiness=true. Elapsed: 4.383913788s May 7 00:15:36.227: INFO: Pod "downward-api-ab2b15a5-5e95-482d-b000-2004f53e71e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.388016417s STEP: Saw pod success May 7 00:15:36.227: INFO: Pod "downward-api-ab2b15a5-5e95-482d-b000-2004f53e71e1" satisfied condition "success or failure" May 7 00:15:36.229: INFO: Trying to get logs from node jerma-worker pod downward-api-ab2b15a5-5e95-482d-b000-2004f53e71e1 container dapi-container: STEP: delete the pod May 7 00:15:36.880: INFO: Waiting for pod downward-api-ab2b15a5-5e95-482d-b000-2004f53e71e1 to disappear May 7 00:15:37.111: INFO: Pod downward-api-ab2b15a5-5e95-482d-b000-2004f53e71e1 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 7 00:15:37.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9731" for this suite. 
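
The env wiring this test asserts, as a minimal manifest. A sketch: the POD_* names are illustrative, while the fieldPath values are the standard downward-API fields for pod name, namespace, and IP:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep '^POD_'"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
EOF
kubectl logs downward-api-demo   # one line per injected variable
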
• [SLOW TEST:8.019 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":269,"skipped":4493,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 7 00:15:37.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 7 00:15:48.014: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 7 00:15:48.040: INFO: Pod pod-with-poststart-http-hook still exists May 7 00:15:50.041: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 7 00:15:50.051: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 7 00:15:50.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4876" for this suite. 
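
The shape of the hook under test is a postStart httpGet that the kubelet fires against a handler pod as soon as the container starts. A sketch: the host, port, and path are illustrative stand-ins for the handler the test created:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: pod-with-poststart-http-hook
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    args: ["pause"]
    lifecycle:
      postStart:
        httpGet:
          path: /echo?msg=poststart   # the handler records this request
          port: 8080
          host: 10.244.1.2            # illustrative: the handler pod's IP
EOF
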
• [SLOW TEST:12.497 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":270,"skipped":4502,"failed":0} [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 7 00:15:50.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 7 00:15:50.130: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dc089caf-dd24-4ed0-8569-a4c10181a79a" in namespace "projected-1698" to be "success or failure" May 7 00:15:50.192: INFO: Pod "downwardapi-volume-dc089caf-dd24-4ed0-8569-a4c10181a79a": Phase="Pending", Reason="", readiness=false. Elapsed: 61.98135ms May 7 00:15:52.196: INFO: Pod "downwardapi-volume-dc089caf-dd24-4ed0-8569-a4c10181a79a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065984612s May 7 00:15:54.200: INFO: Pod "downwardapi-volume-dc089caf-dd24-4ed0-8569-a4c10181a79a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.06983499s STEP: Saw pod success May 7 00:15:54.200: INFO: Pod "downwardapi-volume-dc089caf-dd24-4ed0-8569-a4c10181a79a" satisfied condition "success or failure" May 7 00:15:54.207: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-dc089caf-dd24-4ed0-8569-a4c10181a79a container client-container: STEP: delete the pod May 7 00:15:54.947: INFO: Waiting for pod downwardapi-volume-dc089caf-dd24-4ed0-8569-a4c10181a79a to disappear May 7 00:15:55.042: INFO: Pod downwardapi-volume-dc089caf-dd24-4ed0-8569-a4c10181a79a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 7 00:15:55.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1698" for this suite. 
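
DefaultMode is the permission bits stamped onto every file in the volume; the DefaultMode:*420 seen in earlier pod dumps is decimal for octal 0644. A sketch with an explicit, more restrictive mode (names and the 0400 value are illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "stat -Lc %a /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400          # files surface as r--------
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF
kubectl logs projected-mode-demo   # prints 400
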
• [SLOW TEST:5.147 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":271,"skipped":4502,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 7 00:15:55.205: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 7 00:15:56.334: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8015 /api/v1/namespaces/watch-8015/configmaps/e2e-watch-test-watch-closed ab705137-1372-416b-9451-c31a5f8a981e 14044526 0 2020-05-07 00:15:56 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 7 00:15:56.334: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8015 /api/v1/namespaces/watch-8015/configmaps/e2e-watch-test-watch-closed ab705137-1372-416b-9451-c31a5f8a981e 14044529 0 2020-05-07 00:15:56 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 7 00:15:56.822: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8015 /api/v1/namespaces/watch-8015/configmaps/e2e-watch-test-watch-closed ab705137-1372-416b-9451-c31a5f8a981e 14044530 0 2020-05-07 00:15:56 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 7 00:15:56.822: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8015 /api/v1/namespaces/watch-8015/configmaps/e2e-watch-test-watch-closed ab705137-1372-416b-9451-c31a5f8a981e 14044531 0 2020-05-07 00:15:56 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 7 00:15:56.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8015" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":272,"skipped":4517,"failed":0} SSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 7 00:15:56.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes May 7 00:15:57.170: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 7 00:16:09.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8919" for this suite. 
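
The submit/observe/delete sequence above, done by hand with two shells. The pod name is illustrative; the watch prints a row for each transition the test waits on:

# shell 1: watch pod lifecycle
kubectl get pods --watch
# shell 2: submit, then delete gracefully
kubectl run pod-submit-demo --restart=Never --image=busybox -- sleep 3600
kubectl delete pod pod-submit-demo --grace-period=30
# the watch shows the pod appear, run, enter Terminating once the kubelet
# observes the termination notice, and finally disappear
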
• [SLOW TEST:12.488 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":273,"skipped":4523,"failed":0} SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 7 00:16:09.488: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-downwardapi-x5mc STEP: Creating a pod to test atomic-volume-subpath May 7 00:16:09.622: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-x5mc" in namespace "subpath-4310" to be "success or failure" May 7 00:16:09.625: INFO: Pod "pod-subpath-test-downwardapi-x5mc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.046882ms May 7 00:16:11.629: INFO: Pod "pod-subpath-test-downwardapi-x5mc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007295754s May 7 00:16:13.634: INFO: Pod "pod-subpath-test-downwardapi-x5mc": Phase="Running", Reason="", readiness=true. Elapsed: 4.012236946s May 7 00:16:15.639: INFO: Pod "pod-subpath-test-downwardapi-x5mc": Phase="Running", Reason="", readiness=true. Elapsed: 6.016815267s May 7 00:16:17.643: INFO: Pod "pod-subpath-test-downwardapi-x5mc": Phase="Running", Reason="", readiness=true. Elapsed: 8.021153184s May 7 00:16:19.647: INFO: Pod "pod-subpath-test-downwardapi-x5mc": Phase="Running", Reason="", readiness=true. Elapsed: 10.025406525s May 7 00:16:21.655: INFO: Pod "pod-subpath-test-downwardapi-x5mc": Phase="Running", Reason="", readiness=true. Elapsed: 12.032792003s May 7 00:16:23.658: INFO: Pod "pod-subpath-test-downwardapi-x5mc": Phase="Running", Reason="", readiness=true. Elapsed: 14.036225176s May 7 00:16:25.661: INFO: Pod "pod-subpath-test-downwardapi-x5mc": Phase="Running", Reason="", readiness=true. Elapsed: 16.039555465s May 7 00:16:27.665: INFO: Pod "pod-subpath-test-downwardapi-x5mc": Phase="Running", Reason="", readiness=true. Elapsed: 18.043047663s May 7 00:16:29.668: INFO: Pod "pod-subpath-test-downwardapi-x5mc": Phase="Running", Reason="", readiness=true. Elapsed: 20.046281888s May 7 00:16:31.672: INFO: Pod "pod-subpath-test-downwardapi-x5mc": Phase="Running", Reason="", readiness=true. Elapsed: 22.050425102s May 7 00:16:33.676: INFO: Pod "pod-subpath-test-downwardapi-x5mc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.054286169s STEP: Saw pod success May 7 00:16:33.676: INFO: Pod "pod-subpath-test-downwardapi-x5mc" satisfied condition "success or failure" May 7 00:16:33.679: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-downwardapi-x5mc container test-container-subpath-downwardapi-x5mc: STEP: delete the pod May 7 00:16:33.767: INFO: Waiting for pod pod-subpath-test-downwardapi-x5mc to disappear May 7 00:16:33.798: INFO: Pod pod-subpath-test-downwardapi-x5mc no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-x5mc May 7 00:16:33.798: INFO: Deleting pod "pod-subpath-test-downwardapi-x5mc" in namespace "subpath-4310" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 7 00:16:33.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4310" for this suite. • [SLOW TEST:24.320 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":274,"skipped":4526,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 7 00:16:33.808: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1585 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 7 00:16:34.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-8589' May 7 00:16:34.227: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 7 00:16:34.227: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created May 7 00:16:34.251: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 May 7 00:16:34.258: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller May 7 00:16:34.450: INFO: scanned /root for discovery docs: May 7 00:16:34.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-8589' May 7 00:16:51.877: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 7 00:16:51.877: INFO: stdout: "Created e2e-test-httpd-rc-c330a9337b77c71e395fefde9d96b165\nScaling up e2e-test-httpd-rc-c330a9337b77c71e395fefde9d96b165 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-c330a9337b77c71e395fefde9d96b165 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-c330a9337b77c71e395fefde9d96b165 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. May 7 00:16:51.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-8589' May 7 00:16:51.970: INFO: stderr: "" May 7 00:16:51.970: INFO: stdout: "e2e-test-httpd-rc-c330a9337b77c71e395fefde9d96b165-ffg8t " May 7 00:16:51.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-c330a9337b77c71e395fefde9d96b165-ffg8t -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8589' May 7 00:16:52.063: INFO: stderr: "" May 7 00:16:52.064: INFO: stdout: "true" May 7 00:16:52.064: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-c330a9337b77c71e395fefde9d96b165-ffg8t -o template --template={{if (exists .
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8589' May 7 00:16:52.162: INFO: stderr: "" May 7 00:16:52.162: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" May 7 00:16:52.162: INFO: e2e-test-httpd-rc-c330a9337b77c71e395fefde9d96b165-ffg8t is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1591 May 7 00:16:52.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-8589' May 7 00:16:52.272: INFO: stderr: "" May 7 00:16:52.272: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 7 00:16:52.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8589" for this suite. • [SLOW TEST:18.487 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1580 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":278,"completed":275,"skipped":4539,"failed":0} S ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 7 00:16:52.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: 
[sig-api-machinery] ResourceQuota
  should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 7 00:17:31.394: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 7 00:17:33.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2193" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":277,"skipped":4554,"failed":0}
SSSSS
------------------------------
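Editor's sketch: the six STEPs above map one-to-one onto ordinary kubectl verbs. A rough manual equivalent follows; the quota name, the hard limit, and the reuse of this test's (now destroyed) namespace are illustrative only:

kubectl create quota demo-quota --hard=pods=5 --namespace=resourcequota-2193
kubectl get resourcequota demo-quota --namespace=resourcequota-2193
kubectl patch resourcequota demo-quota --namespace=resourcequota-2193 --type=merge -p '{"spec":{"hard":{"pods":"10"}}}'
kubectl get resourcequota demo-quota --namespace=resourcequota-2193 -o jsonpath='{.spec.hard.pods}{"\n"}'   # now 10
kubectl delete resourcequota demo-quota --namespace=resourcequota-2193
kubectl get resourcequota demo-quota --namespace=resourcequota-2193   # Error from server (NotFound)

------------------------------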
[Conformance]","total":278,"completed":277,"skipped":4554,"failed":0} SSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 7 00:17:33.916: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 7 00:17:34.904: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 7 00:17:34.961: INFO: Waiting for terminating namespaces to be deleted... May 7 00:17:34.963: INFO: Logging pods the kubelet thinks is on node jerma-worker before test May 7 00:17:34.968: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 7 00:17:34.968: INFO: Container kindnet-cni ready: true, restart count 0 May 7 00:17:34.968: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 7 00:17:34.968: INFO: Container kube-proxy ready: true, restart count 0 May 7 00:17:34.968: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test May 7 00:17:34.972: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 7 00:17:34.972: INFO: Container kindnet-cni ready: true, restart count 0 May 7 00:17:34.972: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) May 7 00:17:34.972: INFO: Container kube-bench ready: false, restart count 0 May 7 00:17:34.972: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 7 00:17:34.972: INFO: Container kube-proxy ready: true, restart count 0 May 7 00:17:34.972: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) May 7 00:17:34.972: INFO: Container kube-hunter ready: false, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.160c96df71d9fc3b], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 7 00:17:35.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8218" for this suite. 
May 7 00:17:36.178: INFO: Running AfterSuite actions on all nodes
May 7 00:17:36.178: INFO: Running AfterSuite actions on node 1
May 7 00:17:36.178: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":278,"skipped":4564,"failed":0}

Ran 278 of 4842 Specs in 5250.794 seconds
SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4564 Skipped
PASS