I0425 21:07:09.233958 6 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0425 21:07:09.234304 6 e2e.go:109] Starting e2e run "c2eede41-49d8-4d0e-b302-fbbda5718c8c" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}

Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1587848828 - Will randomize all specs
Will run 278 of 4842 specs

Apr 25 21:07:09.297: INFO: >>> kubeConfig: /root/.kube/config
Apr 25 21:07:09.302: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 25 21:07:09.325: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 25 21:07:09.360: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 25 21:07:09.360: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 25 21:07:09.360: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 25 21:07:09.369: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Apr 25 21:07:09.369: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 25 21:07:09.369: INFO: e2e test version: v1.17.4
Apr 25 21:07:09.371: INFO: kube-apiserver version: v1.17.2
Apr 25 21:07:09.371: INFO: >>> kubeConfig: /root/.kube/config
Apr 25 21:07:09.376: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 21:07:09.376: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
Apr 25 21:07:09.443: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-1209
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a new StatefulSet
Apr 25 21:07:09.509: INFO: Found 0 stateful pods, waiting for 3
Apr 25 21:07:19.514: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 25 21:07:19.514: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 25 21:07:19.514: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Apr 25 21:07:19.541: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Apr 25 21:07:29.664: INFO: Updating stateful set ss2
Apr 25 21:07:29.726: INFO: Waiting for Pod statefulset-1209/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Apr 25 21:07:39.888: INFO: Found 2 stateful pods, waiting for 3
Apr 25 21:07:49.893: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 25 21:07:49.893: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 25 21:07:49.893: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Apr 25 21:07:49.923: INFO: Updating stateful set ss2
Apr 25 21:07:49.976: INFO: Waiting for Pod statefulset-1209/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Apr 25 21:08:00.003: INFO: Updating stateful set ss2
Apr 25 21:08:00.042: INFO: Waiting for StatefulSet statefulset-1209/ss2 to complete update
Apr 25 21:08:00.042: INFO: Waiting for Pod statefulset-1209/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Apr 25 21:08:10.050: INFO: Deleting all statefulset in ns statefulset-1209
Apr 25 21:08:10.054: INFO: Scaling statefulset ss2 to 0
Apr 25 21:08:30.078: INFO: Waiting for statefulset status.replicas updated to 0
Apr 25 21:08:30.081: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 21:08:30.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1209" for this suite.
• [SLOW TEST:80.728 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":1,"skipped":31,"failed":0}
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 21:08:30.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Apr 25 21:08:30.157: INFO: Waiting up to 5m0s for pod "pod-981a39d2-959e-4c0b-8aa3-1bede851f4ce" in namespace "emptydir-6174" to be "success or failure"
Apr 25 21:08:30.161: INFO: Pod "pod-981a39d2-959e-4c0b-8aa3-1bede851f4ce": Phase="Pending", Reason="", readiness=false. Elapsed: 3.460397ms
Apr 25 21:08:32.164: INFO: Pod "pod-981a39d2-959e-4c0b-8aa3-1bede851f4ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006985448s
Apr 25 21:08:34.169: INFO: Pod "pod-981a39d2-959e-4c0b-8aa3-1bede851f4ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011414049s
STEP: Saw pod success
Apr 25 21:08:34.169: INFO: Pod "pod-981a39d2-959e-4c0b-8aa3-1bede851f4ce" satisfied condition "success or failure"
Apr 25 21:08:34.172: INFO: Trying to get logs from node jerma-worker pod pod-981a39d2-959e-4c0b-8aa3-1bede851f4ce container test-container: 
STEP: delete the pod
Apr 25 21:08:34.205: INFO: Waiting for pod pod-981a39d2-959e-4c0b-8aa3-1bede851f4ce to disappear
Apr 25 21:08:34.208: INFO: Pod pod-981a39d2-959e-4c0b-8aa3-1bede851f4ce no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 21:08:34.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6174" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":2,"skipped":38,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 21:08:34.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-83926c83-d34e-4760-a8c3-e245711f87eb
STEP: Creating secret with name s-test-opt-upd-661dfab3-109c-4d74-8215-418568d7a1a0
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-83926c83-d34e-4760-a8c3-e245711f87eb
STEP: Updating secret s-test-opt-upd-661dfab3-109c-4d74-8215-418568d7a1a0
STEP: Creating secret with name s-test-opt-create-ae0c42b2-a95f-4730-8a64-ebfd65aa9999
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 21:08:44.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1201" for this suite.
• [SLOW TEST:10.297 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":3,"skipped":60,"failed":0}
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 21:08:44.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on tmpfs
Apr 25 21:08:44.622: INFO: Waiting up to 5m0s for pod "pod-2f2bfa2e-dcb5-4b8a-9f35-3d918e9b083b" in namespace "emptydir-5403" to be "success or failure"
Apr 25 21:08:44.627: INFO: Pod "pod-2f2bfa2e-dcb5-4b8a-9f35-3d918e9b083b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.314471ms
Apr 25 21:08:46.631: INFO: Pod "pod-2f2bfa2e-dcb5-4b8a-9f35-3d918e9b083b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009246442s
Apr 25 21:08:48.635: INFO: Pod "pod-2f2bfa2e-dcb5-4b8a-9f35-3d918e9b083b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013454542s
STEP: Saw pod success
Apr 25 21:08:48.635: INFO: Pod "pod-2f2bfa2e-dcb5-4b8a-9f35-3d918e9b083b" satisfied condition "success or failure"
Apr 25 21:08:48.638: INFO: Trying to get logs from node jerma-worker pod pod-2f2bfa2e-dcb5-4b8a-9f35-3d918e9b083b container test-container: 
STEP: delete the pod
Apr 25 21:08:48.681: INFO: Waiting for pod pod-2f2bfa2e-dcb5-4b8a-9f35-3d918e9b083b to disappear
Apr 25 21:08:48.688: INFO: Pod pod-2f2bfa2e-dcb5-4b8a-9f35-3d918e9b083b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 21:08:48.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5403" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":4,"skipped":67,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1
  should proxy through a service and a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 21:08:48.696: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-7v8xv in namespace proxy-696
I0425 21:08:48.855400 6 runners.go:189] Created replication controller with name: proxy-service-7v8xv, namespace: proxy-696, replica count: 1
I0425 21:08:49.905806 6 runners.go:189] proxy-service-7v8xv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0425 21:08:50.906046 6 runners.go:189] proxy-service-7v8xv Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0425 21:08:51.906252 6 runners.go:189] proxy-service-7v8xv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0425 21:08:52.906480 6 runners.go:189] proxy-service-7v8xv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0425 21:08:53.906734 6 runners.go:189] proxy-service-7v8xv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0425 21:08:54.906959 6 runners.go:189] proxy-service-7v8xv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0425 21:08:55.907173 6 runners.go:189] proxy-service-7v8xv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0425 21:08:56.907386 6 runners.go:189] proxy-service-7v8xv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0425 21:08:57.907580 6 runners.go:189] proxy-service-7v8xv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0425 21:08:58.907881 6 runners.go:189] proxy-service-7v8xv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0425 21:08:59.908128 6 runners.go:189] proxy-service-7v8xv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0425 21:09:00.908328 6 runners.go:189] proxy-service-7v8xv Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0425 21:09:01.908534 6 runners.go:189] proxy-service-7v8xv Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Apr 25 21:09:01.911: INFO: setup took 13.166868507s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Apr 25 21:09:01.916: INFO: (0) /api/v1/namespaces/proxy-696/pods/proxy-service-7v8xv-wc7cx:162/proxy/: bar (200; 4.271241ms)
Apr 25 21:09:01.923: INFO: (0) /api/v1/namespaces/proxy-696/pods/http:proxy-service-7v8xv-wc7cx:1080/proxy/: t... (200; 10.98136ms)
Apr 25 21:09:01.923: INFO: (0) /api/v1/namespaces/proxy-696/pods/proxy-service-7v8xv-wc7cx:1080/proxy/: testtest (200; 15.525651ms)
Apr 25 21:09:01.928: INFO: (0) /api/v1/namespaces/proxy-696/pods/https:proxy-service-7v8xv-wc7cx:443/proxy/: test (200; 9.111818ms)
Apr 25 21:09:01.941: INFO: (1) /api/v1/namespaces/proxy-696/pods/https:proxy-service-7v8xv-wc7cx:443/proxy/: t... (200; 10.206375ms)
Apr 25 21:09:01.942: INFO: (1) /api/v1/namespaces/proxy-696/pods/proxy-service-7v8xv-wc7cx:162/proxy/: bar (200; 10.239539ms)
Apr 25 21:09:01.942: INFO: (1) /api/v1/namespaces/proxy-696/services/http:proxy-service-7v8xv:portname2/proxy/: bar (200; 10.259224ms)
Apr 25 21:09:01.942: INFO: (1) /api/v1/namespaces/proxy-696/pods/https:proxy-service-7v8xv-wc7cx:460/proxy/: tls baz (200; 10.185803ms)
Apr 25 21:09:01.942: INFO: (1) /api/v1/namespaces/proxy-696/services/proxy-service-7v8xv:portname1/proxy/: foo (200; 10.495206ms)
Apr 25 21:09:01.943: INFO: (1) /api/v1/namespaces/proxy-696/pods/http:proxy-service-7v8xv-wc7cx:160/proxy/: foo (200; 10.577987ms)
Apr 25 21:09:01.943: INFO: (1) /api/v1/namespaces/proxy-696/pods/proxy-service-7v8xv-wc7cx:1080/proxy/: testtesttest (200; 7.337957ms)
Apr 25 21:09:01.951: INFO: (2) /api/v1/namespaces/proxy-696/services/http:proxy-service-7v8xv:portname2/proxy/: bar (200; 7.13127ms)
Apr 25 21:09:01.951: INFO: (2) /api/v1/namespaces/proxy-696/pods/https:proxy-service-7v8xv-wc7cx:460/proxy/: tls baz (200; 7.226069ms)
Apr 25 21:09:01.951: INFO: (2) /api/v1/namespaces/proxy-696/pods/http:proxy-service-7v8xv-wc7cx:1080/proxy/: t... (200; 7.557397ms)
Apr 25 21:09:01.951: INFO: (2) /api/v1/namespaces/proxy-696/pods/https:proxy-service-7v8xv-wc7cx:443/proxy/: test (200; 4.858174ms)
Apr 25 21:09:01.956: INFO: (3) /api/v1/namespaces/proxy-696/pods/http:proxy-service-7v8xv-wc7cx:1080/proxy/: t... (200; 4.960134ms)
Apr 25 21:09:01.956: INFO: (3) /api/v1/namespaces/proxy-696/pods/proxy-service-7v8xv-wc7cx:1080/proxy/: testtestt... (200; 4.289996ms)
Apr 25 21:09:01.964: INFO: (4) /api/v1/namespaces/proxy-696/pods/proxy-service-7v8xv-wc7cx/proxy/: test (200; 4.265572ms)
Apr 25 21:09:01.964: INFO: (4) /api/v1/namespaces/proxy-696/pods/http:proxy-service-7v8xv-wc7cx:160/proxy/: foo (200; 4.826748ms)
Apr 25 21:09:01.965: INFO: (4) /api/v1/namespaces/proxy-696/pods/http:proxy-service-7v8xv-wc7cx:162/proxy/: bar (200; 5.082557ms)
Apr 25 21:09:01.965: INFO: (4) /api/v1/namespaces/proxy-696/services/https:proxy-service-7v8xv:tlsportname1/proxy/: tls baz (200; 5.183646ms)
Apr 25 21:09:01.965: INFO: (4) /api/v1/namespaces/proxy-696/services/https:proxy-service-7v8xv:tlsportname2/proxy/: tls qux (200; 5.130424ms)
Apr 25 21:09:01.965: INFO: (4) /api/v1/namespaces/proxy-696/services/proxy-service-7v8xv:portname1/proxy/: foo (200; 5.359048ms)
Apr 25 21:09:01.965: INFO: (4) /api/v1/namespaces/proxy-696/services/proxy-service-7v8xv:portname2/proxy/: bar (200; 5.396505ms)
Apr 25 21:09:01.965: INFO: (4) /api/v1/namespaces/proxy-696/pods/https:proxy-service-7v8xv-wc7cx:460/proxy/: tls baz (200; 5.483318ms)
Apr 25 21:09:01.965: INFO: (4) /api/v1/namespaces/proxy-696/pods/https:proxy-service-7v8xv-wc7cx:462/proxy/: tls qux (200; 5.504996ms)
Apr 25 21:09:01.965: INFO: (4) /api/v1/namespaces/proxy-696/pods/proxy-service-7v8xv-wc7cx:160/proxy/: foo (200; 5.454132ms)
Apr 25 21:09:01.965: INFO: (4) /api/v1/namespaces/proxy-696/services/http:proxy-service-7v8xv:portname1/proxy/: foo (200; 5.718804ms)
Apr 25 21:09:01.966: INFO: (4) /api/v1/namespaces/proxy-696/services/http:proxy-service-7v8xv:portname2/proxy/: bar (200; 5.986583ms)
Apr 25 21:09:01.970: INFO: (5) /api/v1/namespaces/proxy-696/pods/https:proxy-service-7v8xv-wc7cx:462/proxy/: tls qux (200; 4.334583ms)
Apr 25 21:09:01.970: INFO: (5) /api/v1/namespaces/proxy-696/pods/proxy-service-7v8xv-wc7cx:1080/proxy/: testtest (200; 4.427275ms)
Apr 25 21:09:01.970: INFO: (5) /api/v1/namespaces/proxy-696/pods/http:proxy-service-7v8xv-wc7cx:1080/proxy/: t... (200; 4.452859ms)
Apr 25 21:09:01.970: INFO: (5) /api/v1/namespaces/proxy-696/pods/https:proxy-service-7v8xv-wc7cx:443/proxy/: t... (200; 3.564118ms)
Apr 25 21:09:01.975: INFO: (6) /api/v1/namespaces/proxy-696/pods/https:proxy-service-7v8xv-wc7cx:460/proxy/: tls baz (200; 3.661044ms)
Apr 25 21:09:01.975: INFO: (6) /api/v1/namespaces/proxy-696/pods/https:proxy-service-7v8xv-wc7cx:443/proxy/: test (200; 3.67763ms)
Apr 25 21:09:01.975: INFO: (6) /api/v1/namespaces/proxy-696/pods/http:proxy-service-7v8xv-wc7cx:162/proxy/: bar (200; 3.780306ms)
Apr 25 21:09:01.976: INFO: (6) /api/v1/namespaces/proxy-696/pods/proxy-service-7v8xv-wc7cx:160/proxy/: foo (200; 3.894235ms)
Apr 25 21:09:01.976: INFO: (6) /api/v1/namespaces/proxy-696/pods/proxy-service-7v8xv-wc7cx:1080/proxy/: testtest (200; 4.438446ms)
Apr 25 21:09:01.983: INFO: (7) /api/v1/namespaces/proxy-696/pods/proxy-service-7v8xv-wc7cx/proxy/: test (200; 4.58346ms)
Apr 25 21:09:01.983: INFO: (7) /api/v1/namespaces/proxy-696/pods/https:proxy-service-7v8xv-wc7cx:443/proxy/: t... (200; 24.508498ms)
Apr 25 21:09:02.011: INFO: (8) /api/v1/namespaces/proxy-696/pods/http:proxy-service-7v8xv-wc7cx:162/proxy/: bar (200; 24.624555ms)
Apr 25 21:09:02.011: INFO: (8) /api/v1/namespaces/proxy-696/pods/http:proxy-service-7v8xv-wc7cx:160/proxy/: foo (200; 24.53777ms)
Apr 25 21:09:02.011: INFO: (8) /api/v1/namespaces/proxy-696/pods/https:proxy-service-7v8xv-wc7cx:462/proxy/: tls qux (200; 24.98981ms)
Apr 25 21:09:02.011: INFO: (8) /api/v1/namespaces/proxy-696/pods/https:proxy-service-7v8xv-wc7cx:460/proxy/: tls baz (200; 24.976667ms)
Apr 25 21:09:02.011: INFO: (8) /api/v1/namespaces/proxy-696/pods/proxy-service-7v8xv-wc7cx:162/proxy/: bar (200; 25.280225ms)
Apr 25 21:09:02.011: INFO: (8) /api/v1/namespaces/proxy-696/pods/proxy-service-7v8xv-wc7cx:1080/proxy/: testtest (200; 25.355029ms)
Apr 25 21:09:02.012: INFO: (8) /api/v1/namespaces/proxy-696/pods/https:proxy-service-7v8xv-wc7cx:443/proxy/: test (200; 4.391134ms)
Apr 25 21:09:02.021: INFO: (9) /api/v1/namespaces/proxy-696/pods/proxy-service-7v8xv-wc7cx:1080/proxy/: testt... (200; 6.052124ms)
Apr 25 21:09:02.021: INFO: (9) /api/v1/namespaces/proxy-696/services/https:proxy-service-7v8xv:tlsportname2/proxy/: tls qux (200; 5.92586ms)
Apr 25 21:09:02.021: INFO: (9) /api/v1/namespaces/proxy-696/pods/https:proxy-service-7v8xv-wc7cx:443/proxy/: test (200; 7.764363ms)
Apr 25 21:09:02.030: INFO: (10) /api/v1/namespaces/proxy-696/pods/proxy-service-7v8xv-wc7cx:162/proxy/: bar (200; 8.009652ms)
Apr 25 21:09:02.030: INFO: (10) /api/v1/namespaces/proxy-696/pods/http:proxy-service-7v8xv-wc7cx:160/proxy/: foo (200; 7.854589ms)
Apr 25 21:09:02.030: INFO: (10) /api/v1/namespaces/proxy-696/services/proxy-service-7v8xv:portname2/proxy/: bar (200; 8.072076ms)
Apr 25 21:09:02.032: INFO: (10) /api/v1/namespaces/proxy-696/pods/proxy-service-7v8xv-wc7cx:160/proxy/: foo (200; 9.65569ms)
Apr 25 21:09:02.032: INFO: (10) /api/v1/namespaces/proxy-696/pods/http:proxy-service-7v8xv-wc7cx:162/proxy/: bar (200; 10.398952ms)
Apr 25 21:09:02.032: INFO: (10) /api/v1/namespaces/proxy-696/pods/https:proxy-service-7v8xv-wc7cx:460/proxy/: tls baz (200; 9.832573ms)
Apr 25 21:09:02.034: INFO: (10) /api/v1/namespaces/proxy-696/pods/proxy-service-7v8xv-wc7cx:1080/proxy/: testt... (200; 12.06862ms)
Apr 25 21:09:02.034: INFO: (10) /api/v1/namespaces/proxy-696/services/proxy-service-7v8xv:portname1/proxy/: foo (200; 12.071635ms)
Apr 25 21:09:02.035: INFO: (10) /api/v1/namespaces/proxy-696/services/https:proxy-service-7v8xv:tlsportname1/proxy/: tls baz (200; 12.371334ms)
Apr 25 21:09:02.038: INFO: (11) /api/v1/namespaces/proxy-696/pods/http:proxy-service-7v8xv-wc7cx:162/proxy/: bar (200; 3.528434ms)
Apr 25 21:09:02.039: INFO: (11) /api/v1/namespaces/proxy-696/pods/http:proxy-service-7v8xv-wc7cx:1080/proxy/: t... (200; 3.746439ms)
Apr 25 21:09:02.040: INFO: (11) /api/v1/namespaces/proxy-696/pods/http:proxy-service-7v8xv-wc7cx:160/proxy/: foo (200; 5.19904ms)
Apr 25 21:09:02.040: INFO: (11) /api/v1/namespaces/proxy-696/pods/https:proxy-service-7v8xv-wc7cx:443/proxy/: testtest (200; 5.694347ms)
Apr 25 21:09:02.041: INFO: (11) /api/v1/namespaces/proxy-696/pods/https:proxy-service-7v8xv-wc7cx:460/proxy/: tls baz (200; 5.688453ms)
Apr 25 21:09:02.041: INFO: (11) /api/v1/namespaces/proxy-696/pods/https:proxy-service-7v8xv-wc7cx:462/proxy/: tls qux (200; 5.790716ms)
Apr 25 21:09:02.041: INFO: (11) /api/v1/namespaces/proxy-696/pods/proxy-service-7v8xv-wc7cx:160/proxy/: foo (200; 5.732319ms)
Apr 25 21:09:02.041: INFO: (11) /api/v1/namespaces/proxy-696/services/proxy-service-7v8xv:portname1/proxy/: foo (200; 6.06728ms)
Apr 25 21:09:02.041: INFO: (11) /api/v1/namespaces/proxy-696/services/http:proxy-service-7v8xv:portname1/proxy/: foo (200; 6.49478ms)
Apr 25 21:09:02.041: INFO: (11) /api/v1/namespaces/proxy-696/pods/proxy-service-7v8xv-wc7cx:162/proxy/: bar (200; 6.507659ms)
Apr 25 21:09:02.041: INFO: (11) /api/v1/namespaces/proxy-696/services/https:proxy-service-7v8xv:tlsportname1/proxy/: tls baz (200; 6.360452ms)
Apr 25 21:09:02.041: INFO: (11) /api/v1/namespaces/proxy-696/services/http:proxy-service-7v8xv:portname2/proxy/: bar (200; 6.642808ms)
Apr 25 21:09:02.042: INFO: (11) /api/v1/namespaces/proxy-696/services/https:proxy-service-7v8xv:tlsportname2/proxy/: tls qux (200; 6.803765ms)
Apr 25 21:09:02.042: INFO: (11) /api/v1/namespaces/proxy-696/services/proxy-service-7v8xv:portname2/proxy/: bar (200; 6.89705ms)
Apr 25 21:09:02.044: INFO: (12) /api/v1/namespaces/proxy-696/pods/proxy-service-7v8xv-wc7cx:160/proxy/: foo (200; 2.478736ms)
Apr 25 21:09:02.044: INFO: (12) /api/v1/namespaces/proxy-696/pods/proxy-service-7v8xv-wc7cx:162/proxy/: bar (200; 2.539762ms)
Apr 25 21:09:02.045: INFO: (12) /api/v1/namespaces/proxy-696/pods/http:proxy-service-7v8xv-wc7cx:1080/proxy/: t... (200; 3.140539ms)
Apr 25 21:09:02.046: INFO: (12) /api/v1/namespaces/proxy-696/pods/http:proxy-service-7v8xv-wc7cx:162/proxy/: bar (200; 3.759314ms)
Apr 25 21:09:02.046: INFO: (12) /api/v1/namespaces/proxy-696/pods/http:proxy-service-7v8xv-wc7cx:160/proxy/: foo (200; 3.678481ms)
Apr 25 21:09:02.046: INFO: (12) /api/v1/namespaces/proxy-696/pods/proxy-service-7v8xv-wc7cx:1080/proxy/: testtest (200; 4.103298ms)
Apr 25 21:09:02.046: INFO: (12) /api/v1/namespaces/proxy-696/pods/https:proxy-service-7v8xv-wc7cx:462/proxy/: tls qux (200; 4.087467ms)
Apr 25 21:09:02.046: INFO: (12) /api/v1/namespaces/proxy-696/pods/https:proxy-service-7v8xv-wc7cx:460/proxy/: tls baz (200; 4.17993ms)
Apr 25 21:09:02.046: INFO: (12) /api/v1/namespaces/proxy-696/services/proxy-service-7v8xv:portname2/proxy/: bar (200; 4.699911ms)
Apr 25 21:09:02.047: INFO: (12) /api/v1/namespaces/proxy-696/services/http:proxy-service-7v8xv:portname2/proxy/: bar (200; 4.916715ms)
Apr 25 21:09:02.047: INFO: (12) /api/v1/namespaces/proxy-696/services/proxy-service-7v8xv:portname1/proxy/: foo (200; 4.986789ms)
Apr 25 21:09:02.049: INFO: (13) /api/v1/namespaces/proxy-696/pods/https:proxy-service-7v8xv-wc7cx:460/proxy/: tls baz (200; 2.186996ms)
Apr 25 21:09:02.050: INFO: (13) /api/v1/namespaces/proxy-696/pods/http:proxy-service-7v8xv-wc7cx:1080/proxy/: t... (200; 3.085035ms)
Apr 25 21:09:02.050: INFO: (13) /api/v1/namespaces/proxy-696/pods/proxy-service-7v8xv-wc7cx/proxy/: test (200; 3.157809ms)
Apr 25 21:09:02.050: INFO: (13) /api/v1/namespaces/proxy-696/pods/http:proxy-service-7v8xv-wc7cx:160/proxy/: foo (200; 3.225802ms)
Apr 25 21:09:02.051: INFO: (13) /api/v1/namespaces/proxy-696/pods/proxy-service-7v8xv-wc7cx:160/proxy/: foo (200; 3.683548ms)
Apr 25 21:09:02.051: INFO: (13) /api/v1/namespaces/proxy-696/pods/http:proxy-service-7v8xv-wc7cx:162/proxy/: bar (200; 4.381909ms)
Apr 25 21:09:02.051: INFO: (13) /api/v1/namespaces/proxy-696/pods/https:proxy-service-7v8xv-wc7cx:462/proxy/: tls qux (200; 4.254316ms)
Apr 25 21:09:02.052: INFO: (13) /api/v1/namespaces/proxy-696/services/http:proxy-service-7v8xv:portname2/proxy/: bar (200; 4.638398ms)
Apr 25 21:09:02.052: INFO: (13) /api/v1/namespaces/proxy-696/pods/https:proxy-service-7v8xv-wc7cx:443/proxy/: testtesttest (200; 5.813486ms)
Apr 25 21:09:02.058: INFO: (14) /api/v1/namespaces/proxy-696/services/proxy-service-7v8xv:portname2/proxy/: bar (200; 5.853858ms)
Apr 25 21:09:02.058: INFO: (14) /api/v1/namespaces/proxy-696/services/http:proxy-service-7v8xv:portname1/proxy/: foo (200; 5.947383ms)
Apr 25 21:09:02.058: INFO: (14) /api/v1/namespaces/proxy-696/pods/https:proxy-service-7v8xv-wc7cx:462/proxy/: tls qux (200; 6.010144ms)
Apr 25 21:09:02.058: INFO: (14) /api/v1/namespaces/proxy-696/services/proxy-service-7v8xv:portname1/proxy/: foo (200; 6.103522ms)
Apr 25 21:09:02.058: INFO: (14) /api/v1/namespaces/proxy-696/pods/http:proxy-service-7v8xv-wc7cx:1080/proxy/: t... (200; 6.011093ms)
Apr 25 21:09:02.058: INFO: (14) /api/v1/namespaces/proxy-696/pods/http:proxy-service-7v8xv-wc7cx:162/proxy/: bar (200; 6.292698ms)
Apr 25 21:09:02.061: INFO: (15) /api/v1/namespaces/proxy-696/pods/https:proxy-service-7v8xv-wc7cx:462/proxy/: tls qux (200; 2.277393ms)
Apr 25 21:09:02.061: INFO: (15) /api/v1/namespaces/proxy-696/pods/https:proxy-service-7v8xv-wc7cx:443/proxy/: test (200; 2.526956ms)
Apr 25 21:09:02.061: INFO: (15) /api/v1/namespaces/proxy-696/pods/http:proxy-service-7v8xv-wc7cx:1080/proxy/: t... (200; 2.739629ms)
Apr 25 21:09:02.063: INFO: (15) /api/v1/namespaces/proxy-696/services/http:proxy-service-7v8xv:portname2/proxy/: bar (200; 4.785991ms)
Apr 25 21:09:02.063: INFO: (15) /api/v1/namespaces/proxy-696/services/https:proxy-service-7v8xv:tlsportname2/proxy/: tls qux (200; 4.858912ms)
Apr 25 21:09:02.064: INFO: (15) /api/v1/namespaces/proxy-696/pods/proxy-service-7v8xv-wc7cx:162/proxy/: bar (200; 4.959896ms)
Apr 25 21:09:02.064: INFO: (15) /api/v1/namespaces/proxy-696/services/https:proxy-service-7v8xv:tlsportname1/proxy/: tls baz (200; 5.056001ms)
Apr 25 21:09:02.064: INFO: (15) /api/v1/namespaces/proxy-696/services/proxy-service-7v8xv:portname1/proxy/: foo (200; 5.028621ms)
Apr 25 21:09:02.064: INFO: (15) /api/v1/namespaces/proxy-696/pods/proxy-service-7v8xv-wc7cx:1080/proxy/: testt... (200; 2.192387ms)
Apr 25 21:09:02.067: INFO: (16) /api/v1/namespaces/proxy-696/pods/proxy-service-7v8xv-wc7cx/proxy/: test (200; 3.023717ms)
Apr 25 21:09:02.068: INFO: (16) /api/v1/namespaces/proxy-696/services/http:proxy-service-7v8xv:portname2/proxy/: bar (200; 3.553657ms)
Apr 25 21:09:02.068: INFO: (16) /api/v1/namespaces/proxy-696/pods/proxy-service-7v8xv-wc7cx:1080/proxy/: testt... (200; 3.020236ms)
Apr 25 21:09:02.072: INFO: (17) /api/v1/namespaces/proxy-696/pods/http:proxy-service-7v8xv-wc7cx:162/proxy/: bar (200; 3.107083ms)
Apr 25 21:09:02.072: INFO: (17) /api/v1/namespaces/proxy-696/services/https:proxy-service-7v8xv:tlsportname1/proxy/: tls baz (200; 3.257936ms)
Apr 25 21:09:02.072: INFO: (17) /api/v1/namespaces/proxy-696/services/https:proxy-service-7v8xv:tlsportname2/proxy/: tls qux (200; 3.228446ms)
Apr 25 21:09:02.073: INFO: (17) /api/v1/namespaces/proxy-696/services/proxy-service-7v8xv:portname1/proxy/: foo (200; 3.455386ms)
Apr 25 21:09:02.073: INFO: (17) /api/v1/namespaces/proxy-696/services/proxy-service-7v8xv:portname2/proxy/: bar (200; 3.485891ms)
Apr 25 21:09:02.073: INFO: (17) /api/v1/namespaces/proxy-696/pods/https:proxy-service-7v8xv-wc7cx:443/proxy/: test (200; 3.972559ms)
Apr 25 21:09:02.073: INFO: (17) /api/v1/namespaces/proxy-696/services/http:proxy-service-7v8xv:portname1/proxy/: foo (200; 4.138754ms)
Apr 25 21:09:02.073: INFO: (17) /api/v1/namespaces/proxy-696/pods/proxy-service-7v8xv-wc7cx:1080/proxy/: testtestt... (200; 3.925123ms)
Apr 25 21:09:02.077: INFO: (18) /api/v1/namespaces/proxy-696/pods/https:proxy-service-7v8xv-wc7cx:443/proxy/: test (200; 3.876375ms)
Apr 25 21:09:02.077: INFO: (18) /api/v1/namespaces/proxy-696/pods/proxy-service-7v8xv-wc7cx:162/proxy/: bar (200; 3.935366ms)
Apr 25 21:09:02.077: INFO: (18) /api/v1/namespaces/proxy-696/pods/http:proxy-service-7v8xv-wc7cx:162/proxy/: bar (200; 3.952865ms)
Apr 25 21:09:02.077: INFO: (18) /api/v1/namespaces/proxy-696/services/http:proxy-service-7v8xv:portname2/proxy/: bar (200; 3.847883ms)
Apr 25 21:09:02.077: INFO: (18) /api/v1/namespaces/proxy-696/services/http:proxy-service-7v8xv:portname1/proxy/: foo (200; 4.20486ms)
Apr 25 21:09:02.078: INFO: (18) /api/v1/namespaces/proxy-696/services/https:proxy-service-7v8xv:tlsportname2/proxy/: tls qux (200; 4.427059ms)
Apr 25 21:09:02.078: INFO: (18) /api/v1/namespaces/proxy-696/services/https:proxy-service-7v8xv:tlsportname1/proxy/: tls baz (200; 4.495045ms)
Apr 25 21:09:02.078: INFO: (18) /api/v1/namespaces/proxy-696/services/proxy-service-7v8xv:portname1/proxy/: foo (200; 4.548791ms)
Apr 25 21:09:02.081: INFO: (19) /api/v1/namespaces/proxy-696/pods/http:proxy-service-7v8xv-wc7cx:162/proxy/: bar (200; 3.374163ms)
Apr 25 21:09:02.081: INFO: (19) /api/v1/namespaces/proxy-696/pods/https:proxy-service-7v8xv-wc7cx:462/proxy/: tls qux (200; 3.568018ms)
Apr 25 21:09:02.082: INFO: (19) /api/v1/namespaces/proxy-696/pods/http:proxy-service-7v8xv-wc7cx:1080/proxy/: t... 
(200; 3.636604ms) Apr 25 21:09:02.082: INFO: (19) /api/v1/namespaces/proxy-696/pods/proxy-service-7v8xv-wc7cx:1080/proxy/: testtest (200; 3.606794ms) Apr 25 21:09:02.082: INFO: (19) /api/v1/namespaces/proxy-696/pods/https:proxy-service-7v8xv-wc7cx:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 25 21:09:09.759: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 25 21:09:11.770: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723445749, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723445749, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723445749, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723445749, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 25 21:09:14.805: INFO: Waiting for 
amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:09:14.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7985" for this suite. STEP: Destroying namespace "webhook-7985-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.724 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":6,"skipped":88,"failed":0} S ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:09:15.074: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap that has name configmap-test-emptyKey-8d7ed6a0-9252-4950-aa7f-813ddcca485d [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:09:15.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6377" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":7,"skipped":89,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:09:15.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-28140bf8-c24b-493e-b9d9-6d3a9cd4e063 in namespace container-probe-8036 Apr 25 21:09:19.505: INFO: Started pod busybox-28140bf8-c24b-493e-b9d9-6d3a9cd4e063 in namespace container-probe-8036 STEP: checking the pod's current state and verifying that restartCount is present Apr 25 21:09:19.508: INFO: Initial restart count of pod busybox-28140bf8-c24b-493e-b9d9-6d3a9cd4e063 is 0 Apr 25 21:10:13.620: INFO: Restart count of pod container-probe-8036/busybox-28140bf8-c24b-493e-b9d9-6d3a9cd4e063 is now 1 (54.112164525s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:10:13.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8036" for this suite. • [SLOW TEST:58.462 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":8,"skipped":115,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:10:13.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation Apr 25 21:10:13.700: INFO: >>> kubeConfig: /root/.kube/config Apr 25 21:10:16.690: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:10:27.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3407" for this suite. 
• [SLOW TEST:13.698 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":9,"skipped":127,"failed":0} [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:10:27.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 25 21:10:27.434: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Apr 25 21:10:27.461: INFO: Number of nodes with available pods: 0 Apr 25 21:10:27.461: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Apr 25 21:10:27.494: INFO: Number of nodes with available pods: 0 Apr 25 21:10:27.494: INFO: Node jerma-worker is running more than one daemon pod Apr 25 21:10:28.507: INFO: Number of nodes with available pods: 0 Apr 25 21:10:28.507: INFO: Node jerma-worker is running more than one daemon pod Apr 25 21:10:29.498: INFO: Number of nodes with available pods: 0 Apr 25 21:10:29.499: INFO: Node jerma-worker is running more than one daemon pod Apr 25 21:10:30.497: INFO: Number of nodes with available pods: 1 Apr 25 21:10:30.497: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Apr 25 21:10:30.555: INFO: Number of nodes with available pods: 1 Apr 25 21:10:30.555: INFO: Number of running nodes: 0, number of available pods: 1 Apr 25 21:10:31.560: INFO: Number of nodes with available pods: 0 Apr 25 21:10:31.560: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Apr 25 21:10:31.578: INFO: Number of nodes with available pods: 0 Apr 25 21:10:31.578: INFO: Node jerma-worker is running more than one daemon pod Apr 25 21:10:32.582: INFO: Number of nodes with available pods: 0 Apr 25 21:10:32.582: INFO: Node jerma-worker is running more than one daemon pod Apr 25 21:10:33.582: INFO: Number of nodes with available pods: 0 Apr 25 21:10:33.582: INFO: Node jerma-worker is running more than one daemon pod Apr 25 21:10:34.597: INFO: Number of nodes with available pods: 0 Apr 25 21:10:34.597: INFO: Node jerma-worker is running more than one daemon pod Apr 25 21:10:35.582: INFO: Number of nodes with available pods: 0 Apr 25 21:10:35.583: INFO: Node jerma-worker is running more than one daemon pod Apr 25 21:10:36.582: INFO: Number of nodes with available pods: 0 Apr 25 21:10:36.582: INFO: Node jerma-worker is running more than one daemon pod Apr 25 21:10:37.583: INFO: Number of nodes with available 
pods: 0 Apr 25 21:10:37.583: INFO: Node jerma-worker is running more than one daemon pod Apr 25 21:10:38.591: INFO: Number of nodes with available pods: 0 Apr 25 21:10:38.591: INFO: Node jerma-worker is running more than one daemon pod Apr 25 21:10:39.583: INFO: Number of nodes with available pods: 0 Apr 25 21:10:39.583: INFO: Node jerma-worker is running more than one daemon pod Apr 25 21:10:40.582: INFO: Number of nodes with available pods: 0 Apr 25 21:10:40.582: INFO: Node jerma-worker is running more than one daemon pod Apr 25 21:10:41.583: INFO: Number of nodes with available pods: 0 Apr 25 21:10:41.583: INFO: Node jerma-worker is running more than one daemon pod Apr 25 21:10:42.583: INFO: Number of nodes with available pods: 1 Apr 25 21:10:42.583: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4938, will wait for the garbage collector to delete the pods Apr 25 21:10:42.652: INFO: Deleting DaemonSet.extensions daemon-set took: 9.19158ms Apr 25 21:10:42.952: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.237786ms Apr 25 21:10:46.070: INFO: Number of nodes with available pods: 0 Apr 25 21:10:46.070: INFO: Number of running nodes: 0, number of available pods: 0 Apr 25 21:10:46.074: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4938/daemonsets","resourceVersion":"11010625"},"items":null} Apr 25 21:10:46.077: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4938/pods","resourceVersion":"11010625"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 
21:10:46.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4938" for this suite. • [SLOW TEST:18.749 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":10,"skipped":127,"failed":0} SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:10:46.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-7b95e3e7-8f02-4f68-b2f1-b5fc38af6133 STEP: Creating a pod to test consume configMaps Apr 25 21:10:46.162: INFO: Waiting up to 5m0s for pod "pod-configmaps-bd8b24ba-7af7-48c3-af13-2c8e802fd288" in namespace "configmap-4370" to be "success or failure" Apr 25 21:10:46.203: INFO: Pod "pod-configmaps-bd8b24ba-7af7-48c3-af13-2c8e802fd288": Phase="Pending", Reason="", readiness=false. 
Elapsed: 40.47637ms Apr 25 21:10:48.207: INFO: Pod "pod-configmaps-bd8b24ba-7af7-48c3-af13-2c8e802fd288": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045063192s Apr 25 21:10:50.212: INFO: Pod "pod-configmaps-bd8b24ba-7af7-48c3-af13-2c8e802fd288": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049304825s STEP: Saw pod success Apr 25 21:10:50.212: INFO: Pod "pod-configmaps-bd8b24ba-7af7-48c3-af13-2c8e802fd288" satisfied condition "success or failure" Apr 25 21:10:50.215: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-bd8b24ba-7af7-48c3-af13-2c8e802fd288 container configmap-volume-test: STEP: delete the pod Apr 25 21:10:50.267: INFO: Waiting for pod pod-configmaps-bd8b24ba-7af7-48c3-af13-2c8e802fd288 to disappear Apr 25 21:10:50.281: INFO: Pod pod-configmaps-bd8b24ba-7af7-48c3-af13-2c8e802fd288 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:10:50.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4370" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":11,"skipped":133,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:10:50.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 25 21:10:50.365: INFO: Waiting up to 5m0s for pod "pod-427e79b9-2e53-4cdf-ba50-c5ea4bdb728e" in namespace "emptydir-3782" to be "success or failure" Apr 25 21:10:50.368: INFO: Pod "pod-427e79b9-2e53-4cdf-ba50-c5ea4bdb728e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.335967ms Apr 25 21:10:52.372: INFO: Pod "pod-427e79b9-2e53-4cdf-ba50-c5ea4bdb728e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007047358s Apr 25 21:10:54.377: INFO: Pod "pod-427e79b9-2e53-4cdf-ba50-c5ea4bdb728e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011793399s STEP: Saw pod success Apr 25 21:10:54.377: INFO: Pod "pod-427e79b9-2e53-4cdf-ba50-c5ea4bdb728e" satisfied condition "success or failure" Apr 25 21:10:54.380: INFO: Trying to get logs from node jerma-worker2 pod pod-427e79b9-2e53-4cdf-ba50-c5ea4bdb728e container test-container: STEP: delete the pod Apr 25 21:10:54.418: INFO: Waiting for pod pod-427e79b9-2e53-4cdf-ba50-c5ea4bdb728e to disappear Apr 25 21:10:54.423: INFO: Pod pod-427e79b9-2e53-4cdf-ba50-c5ea4bdb728e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:10:54.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3782" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":12,"skipped":145,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:10:54.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc 
simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0425 21:11:06.736265 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 25 21:11:06.736: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:11:06.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6421" for this suite. 
• [SLOW TEST:12.313 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":13,"skipped":160,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:11:06.743: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Apr 25 21:11:06.794: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:11:12.408: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8618" for this suite. • [SLOW TEST:5.827 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":14,"skipped":175,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:11:12.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Apr 25 21:11:12.953: INFO: Waiting up to 5m0s for pod "pod-0bca7a3b-a414-40bc-a502-10b238a49ece" in namespace "emptydir-1062" to be "success or failure" Apr 25 21:11:13.060: INFO: Pod "pod-0bca7a3b-a414-40bc-a502-10b238a49ece": Phase="Pending", Reason="", readiness=false. Elapsed: 106.762551ms Apr 25 21:11:15.064: INFO: Pod "pod-0bca7a3b-a414-40bc-a502-10b238a49ece": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.110638317s Apr 25 21:11:17.068: INFO: Pod "pod-0bca7a3b-a414-40bc-a502-10b238a49ece": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.115114351s STEP: Saw pod success Apr 25 21:11:17.068: INFO: Pod "pod-0bca7a3b-a414-40bc-a502-10b238a49ece" satisfied condition "success or failure" Apr 25 21:11:17.071: INFO: Trying to get logs from node jerma-worker2 pod pod-0bca7a3b-a414-40bc-a502-10b238a49ece container test-container: STEP: delete the pod Apr 25 21:11:17.127: INFO: Waiting for pod pod-0bca7a3b-a414-40bc-a502-10b238a49ece to disappear Apr 25 21:11:17.142: INFO: Pod pod-0bca7a3b-a414-40bc-a502-10b238a49ece no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:11:17.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1062" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":15,"skipped":179,"failed":0} SSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:11:17.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod test-webserver-834d7169-79cb-4008-94f3-c4ab9b457c3c in namespace container-probe-1463 Apr 25 21:11:21.296: INFO: Started pod test-webserver-834d7169-79cb-4008-94f3-c4ab9b457c3c in namespace container-probe-1463 STEP: checking the pod's current state and verifying that restartCount is present Apr 25 21:11:21.299: INFO: Initial restart count of pod test-webserver-834d7169-79cb-4008-94f3-c4ab9b457c3c is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:15:21.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1463" for this suite. • [SLOW TEST:244.747 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":16,"skipped":185,"failed":0} [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:15:21.897: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service nodeport-test with type=NodePort in namespace services-484 STEP: creating replication controller nodeport-test in namespace services-484 I0425 21:15:22.206999 6 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-484, replica count: 2 I0425 21:15:25.257491 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0425 21:15:28.257725 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 25 21:15:28.257: INFO: Creating new exec pod Apr 25 21:15:33.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-484 execpodzx6s2 -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Apr 25 21:15:35.719: INFO: stderr: "I0425 21:15:35.634255 36 log.go:172] (0xc000105550) (0xc0007117c0) Create stream\nI0425 21:15:35.634315 36 log.go:172] (0xc000105550) (0xc0007117c0) Stream added, broadcasting: 1\nI0425 21:15:35.636651 36 log.go:172] (0xc000105550) Reply frame received for 1\nI0425 21:15:35.636691 36 log.go:172] (0xc000105550) (0xc000acc0a0) Create stream\nI0425 21:15:35.636711 36 log.go:172] (0xc000105550) (0xc000acc0a0) Stream added, broadcasting: 3\nI0425 21:15:35.637861 36 log.go:172] (0xc000105550) Reply frame received for 3\nI0425 21:15:35.637891 36 log.go:172] (0xc000105550) (0xc0008a20a0) Create stream\nI0425 21:15:35.637900 36 log.go:172] (0xc000105550) (0xc0008a20a0) Stream added, broadcasting: 5\nI0425 21:15:35.638899 36 log.go:172] (0xc000105550) Reply frame received for 5\nI0425 21:15:35.709927 36 log.go:172] 
(0xc000105550) Data frame received for 5\nI0425 21:15:35.709959 36 log.go:172] (0xc0008a20a0) (5) Data frame handling\nI0425 21:15:35.709979 36 log.go:172] (0xc0008a20a0) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0425 21:15:35.710269 36 log.go:172] (0xc000105550) Data frame received for 5\nI0425 21:15:35.710300 36 log.go:172] (0xc0008a20a0) (5) Data frame handling\nI0425 21:15:35.710327 36 log.go:172] (0xc0008a20a0) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0425 21:15:35.710582 36 log.go:172] (0xc000105550) Data frame received for 3\nI0425 21:15:35.710596 36 log.go:172] (0xc000acc0a0) (3) Data frame handling\nI0425 21:15:35.710917 36 log.go:172] (0xc000105550) Data frame received for 5\nI0425 21:15:35.710934 36 log.go:172] (0xc0008a20a0) (5) Data frame handling\nI0425 21:15:35.712299 36 log.go:172] (0xc000105550) Data frame received for 1\nI0425 21:15:35.712314 36 log.go:172] (0xc0007117c0) (1) Data frame handling\nI0425 21:15:35.712323 36 log.go:172] (0xc0007117c0) (1) Data frame sent\nI0425 21:15:35.712335 36 log.go:172] (0xc000105550) (0xc0007117c0) Stream removed, broadcasting: 1\nI0425 21:15:35.712343 36 log.go:172] (0xc000105550) Go away received\nI0425 21:15:35.712672 36 log.go:172] (0xc000105550) (0xc0007117c0) Stream removed, broadcasting: 1\nI0425 21:15:35.712687 36 log.go:172] (0xc000105550) (0xc000acc0a0) Stream removed, broadcasting: 3\nI0425 21:15:35.712694 36 log.go:172] (0xc000105550) (0xc0008a20a0) Stream removed, broadcasting: 5\n" Apr 25 21:15:35.719: INFO: stdout: "" Apr 25 21:15:35.720: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-484 execpodzx6s2 -- /bin/sh -x -c nc -zv -t -w 2 10.98.22.39 80' Apr 25 21:15:35.925: INFO: stderr: "I0425 21:15:35.847217 64 log.go:172] (0xc0000f6370) (0xc0006f3d60) Create stream\nI0425 21:15:35.847288 64 log.go:172] (0xc0000f6370) (0xc0006f3d60) Stream added, broadcasting: 1\nI0425 21:15:35.849826 64 
log.go:172] (0xc0000f6370) Reply frame received for 1\nI0425 21:15:35.849868 64 log.go:172] (0xc0000f6370) (0xc0007a94a0) Create stream\nI0425 21:15:35.849878 64 log.go:172] (0xc0000f6370) (0xc0007a94a0) Stream added, broadcasting: 3\nI0425 21:15:35.850780 64 log.go:172] (0xc0000f6370) Reply frame received for 3\nI0425 21:15:35.850808 64 log.go:172] (0xc0000f6370) (0xc0007a9540) Create stream\nI0425 21:15:35.850816 64 log.go:172] (0xc0000f6370) (0xc0007a9540) Stream added, broadcasting: 5\nI0425 21:15:35.851616 64 log.go:172] (0xc0000f6370) Reply frame received for 5\nI0425 21:15:35.917953 64 log.go:172] (0xc0000f6370) Data frame received for 3\nI0425 21:15:35.918004 64 log.go:172] (0xc0007a94a0) (3) Data frame handling\nI0425 21:15:35.918037 64 log.go:172] (0xc0000f6370) Data frame received for 5\nI0425 21:15:35.918054 64 log.go:172] (0xc0007a9540) (5) Data frame handling\nI0425 21:15:35.918073 64 log.go:172] (0xc0007a9540) (5) Data frame sent\nI0425 21:15:35.918088 64 log.go:172] (0xc0000f6370) Data frame received for 5\nI0425 21:15:35.918100 64 log.go:172] (0xc0007a9540) (5) Data frame handling\n+ nc -zv -t -w 2 10.98.22.39 80\nConnection to 10.98.22.39 80 port [tcp/http] succeeded!\nI0425 21:15:35.919712 64 log.go:172] (0xc0000f6370) Data frame received for 1\nI0425 21:15:35.919756 64 log.go:172] (0xc0006f3d60) (1) Data frame handling\nI0425 21:15:35.919797 64 log.go:172] (0xc0006f3d60) (1) Data frame sent\nI0425 21:15:35.919838 64 log.go:172] (0xc0000f6370) (0xc0006f3d60) Stream removed, broadcasting: 1\nI0425 21:15:35.919869 64 log.go:172] (0xc0000f6370) Go away received\nI0425 21:15:35.920400 64 log.go:172] (0xc0000f6370) (0xc0006f3d60) Stream removed, broadcasting: 1\nI0425 21:15:35.920424 64 log.go:172] (0xc0000f6370) (0xc0007a94a0) Stream removed, broadcasting: 3\nI0425 21:15:35.920436 64 log.go:172] (0xc0000f6370) (0xc0007a9540) Stream removed, broadcasting: 5\n" Apr 25 21:15:35.925: INFO: stdout: "" Apr 25 21:15:35.925: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-484 execpodzx6s2 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 32597' Apr 25 21:15:36.134: INFO: stderr: "I0425 21:15:36.057602 84 log.go:172] (0xc0003d8000) (0xc0006d19a0) Create stream\nI0425 21:15:36.057675 84 log.go:172] (0xc0003d8000) (0xc0006d19a0) Stream added, broadcasting: 1\nI0425 21:15:36.060483 84 log.go:172] (0xc0003d8000) Reply frame received for 1\nI0425 21:15:36.060534 84 log.go:172] (0xc0003d8000) (0xc000af4000) Create stream\nI0425 21:15:36.060556 84 log.go:172] (0xc0003d8000) (0xc000af4000) Stream added, broadcasting: 3\nI0425 21:15:36.061634 84 log.go:172] (0xc0003d8000) Reply frame received for 3\nI0425 21:15:36.061667 84 log.go:172] (0xc0003d8000) (0xc0006d1b80) Create stream\nI0425 21:15:36.061677 84 log.go:172] (0xc0003d8000) (0xc0006d1b80) Stream added, broadcasting: 5\nI0425 21:15:36.062470 84 log.go:172] (0xc0003d8000) Reply frame received for 5\nI0425 21:15:36.127617 84 log.go:172] (0xc0003d8000) Data frame received for 5\nI0425 21:15:36.127650 84 log.go:172] (0xc0006d1b80) (5) Data frame handling\nI0425 21:15:36.127659 84 log.go:172] (0xc0006d1b80) (5) Data frame sent\nI0425 21:15:36.127664 84 log.go:172] (0xc0003d8000) Data frame received for 5\nI0425 21:15:36.127670 84 log.go:172] (0xc0006d1b80) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.10 32597\nConnection to 172.17.0.10 32597 port [tcp/32597] succeeded!\nI0425 21:15:36.127682 84 log.go:172] (0xc0003d8000) Data frame received for 3\nI0425 21:15:36.127749 84 log.go:172] (0xc000af4000) (3) Data frame handling\nI0425 21:15:36.128789 84 log.go:172] (0xc0003d8000) Data frame received for 1\nI0425 21:15:36.128810 84 log.go:172] (0xc0006d19a0) (1) Data frame handling\nI0425 21:15:36.128823 84 log.go:172] (0xc0006d19a0) (1) Data frame sent\nI0425 21:15:36.128836 84 log.go:172] (0xc0003d8000) (0xc0006d19a0) Stream removed, broadcasting: 1\nI0425 21:15:36.128852 84 log.go:172] (0xc0003d8000) Go away 
received\nI0425 21:15:36.129278 84 log.go:172] (0xc0003d8000) (0xc0006d19a0) Stream removed, broadcasting: 1\nI0425 21:15:36.129295 84 log.go:172] (0xc0003d8000) (0xc000af4000) Stream removed, broadcasting: 3\nI0425 21:15:36.129303 84 log.go:172] (0xc0003d8000) (0xc0006d1b80) Stream removed, broadcasting: 5\n" Apr 25 21:15:36.134: INFO: stdout: "" Apr 25 21:15:36.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-484 execpodzx6s2 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 32597' Apr 25 21:15:36.356: INFO: stderr: "I0425 21:15:36.275268 104 log.go:172] (0xc00011af20) (0xc000699f40) Create stream\nI0425 21:15:36.275332 104 log.go:172] (0xc00011af20) (0xc000699f40) Stream added, broadcasting: 1\nI0425 21:15:36.278122 104 log.go:172] (0xc00011af20) Reply frame received for 1\nI0425 21:15:36.278160 104 log.go:172] (0xc00011af20) (0xc0006168c0) Create stream\nI0425 21:15:36.278172 104 log.go:172] (0xc00011af20) (0xc0006168c0) Stream added, broadcasting: 3\nI0425 21:15:36.279103 104 log.go:172] (0xc00011af20) Reply frame received for 3\nI0425 21:15:36.279141 104 log.go:172] (0xc00011af20) (0xc0004d7680) Create stream\nI0425 21:15:36.279150 104 log.go:172] (0xc00011af20) (0xc0004d7680) Stream added, broadcasting: 5\nI0425 21:15:36.279998 104 log.go:172] (0xc00011af20) Reply frame received for 5\nI0425 21:15:36.349632 104 log.go:172] (0xc00011af20) Data frame received for 5\nI0425 21:15:36.349753 104 log.go:172] (0xc0004d7680) (5) Data frame handling\nI0425 21:15:36.349843 104 log.go:172] (0xc0004d7680) (5) Data frame sent\nI0425 21:15:36.349938 104 log.go:172] (0xc00011af20) Data frame received for 5\nI0425 21:15:36.349955 104 log.go:172] (0xc0004d7680) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.8 32597\nConnection to 172.17.0.8 32597 port [tcp/32597] succeeded!\nI0425 21:15:36.350039 104 log.go:172] (0xc00011af20) Data frame received for 3\nI0425 21:15:36.350057 104 log.go:172] (0xc0006168c0) (3) Data frame 
handling\nI0425 21:15:36.351690 104 log.go:172] (0xc00011af20) Data frame received for 1\nI0425 21:15:36.351713 104 log.go:172] (0xc000699f40) (1) Data frame handling\nI0425 21:15:36.351739 104 log.go:172] (0xc000699f40) (1) Data frame sent\nI0425 21:15:36.351758 104 log.go:172] (0xc00011af20) (0xc000699f40) Stream removed, broadcasting: 1\nI0425 21:15:36.351775 104 log.go:172] (0xc00011af20) Go away received\nI0425 21:15:36.352213 104 log.go:172] (0xc00011af20) (0xc000699f40) Stream removed, broadcasting: 1\nI0425 21:15:36.352239 104 log.go:172] (0xc00011af20) (0xc0006168c0) Stream removed, broadcasting: 3\nI0425 21:15:36.352253 104 log.go:172] (0xc00011af20) (0xc0004d7680) Stream removed, broadcasting: 5\n" Apr 25 21:15:36.356: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:15:36.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-484" for this suite. 
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
• [SLOW TEST:14.467 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":17,"skipped":185,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 21:15:36.364: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 25 21:15:40.485: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 21:15:40.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3229" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":18,"skipped":202,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 21:15:40.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 25 21:15:40.557: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 21:15:41.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5816" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":19,"skipped":245,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 21:15:41.247: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 25 21:15:45.363: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 21:15:45.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9445" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":20,"skipped":252,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 21:15:45.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
Apr 25 21:15:45.467: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0589b6f4-fff1-48f7-bb03-71557074beea" in namespace "projected-4068" to be "success or failure"
Apr 25 21:15:45.471: INFO: Pod "downwardapi-volume-0589b6f4-fff1-48f7-bb03-71557074beea": Phase="Pending", Reason="", readiness=false. Elapsed: 3.873464ms
Apr 25 21:15:47.476: INFO: Pod "downwardapi-volume-0589b6f4-fff1-48f7-bb03-71557074beea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009384831s
Apr 25 21:15:49.480: INFO: Pod "downwardapi-volume-0589b6f4-fff1-48f7-bb03-71557074beea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012850866s
STEP: Saw pod success
Apr 25 21:15:49.480: INFO: Pod "downwardapi-volume-0589b6f4-fff1-48f7-bb03-71557074beea" satisfied condition "success or failure"
Apr 25 21:15:49.483: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-0589b6f4-fff1-48f7-bb03-71557074beea container client-container:
STEP: delete the pod
Apr 25 21:15:49.573: INFO: Waiting for pod downwardapi-volume-0589b6f4-fff1-48f7-bb03-71557074beea to disappear
Apr 25 21:15:49.579: INFO: Pod downwardapi-volume-0589b6f4-fff1-48f7-bb03-71557074beea no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 21:15:49.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4068" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":21,"skipped":270,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 21:15:49.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 25 21:15:49.638: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Apr 25 21:15:49.663: INFO: Pod name sample-pod: Found 0 pods out of 1
Apr 25 21:15:54.666: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Apr 25 21:15:54.667: INFO: Creating deployment "test-rolling-update-deployment"
Apr 25 21:15:54.670: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Apr 25 21:15:54.727: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Apr 25 21:15:56.734: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Apr 25 21:15:56.737: INFO: deployment status:
v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723446154, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723446154, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723446154, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723446154, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 25 21:15:58.740: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Apr 25 21:15:58.749: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-9931 /apis/apps/v1/namespaces/deployment-9931/deployments/test-rolling-update-deployment 9456036f-418c-4083-ba82-3ecdb5d90fae 11012073 1 2020-04-25 21:15:54 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005454f68 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-25 21:15:54 +0000 UTC,LastTransitionTime:2020-04-25 21:15:54 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-04-25 21:15:58 +0000 UTC,LastTransitionTime:2020-04-25 21:15:54 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 25 21:15:58.752: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-9931 /apis/apps/v1/namespaces/deployment-9931/replicasets/test-rolling-update-deployment-67cf4f6444 893714f9-da3c-457f-953e-548f7315f375 11012061 1 2020-04-25 21:15:54 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 
Deployment test-rolling-update-deployment 9456036f-418c-4083-ba82-3ecdb5d90fae 0xc00405be97 0xc00405be98}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00405bf08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 25 21:15:58.752: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Apr 25 21:15:58.752: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-9931 /apis/apps/v1/namespaces/deployment-9931/replicasets/test-rolling-update-controller 837eedf6-a90a-40c7-b2eb-ae01fffbb3fd 11012070 2 2020-04-25 21:15:49 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 9456036f-418c-4083-ba82-3ecdb5d90fae 0xc00405bdc7 0xc00405bdc8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00405be28 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 25 21:15:58.756: INFO: Pod "test-rolling-update-deployment-67cf4f6444-jz68w" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-jz68w test-rolling-update-deployment-67cf4f6444- deployment-9931 /api/v1/namespaces/deployment-9931/pods/test-rolling-update-deployment-67cf4f6444-jz68w 5ad54c3c-f41a-4e72-9592-92078d892800 11012060 0 2020-04-25 21:15:54 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 893714f9-da3c-457f-953e-548f7315f375 0xc002870367 0xc002870368}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-k8x2m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-k8x2m,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-k8x2m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostn
ame:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:15:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:15:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:15:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:15:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.176,StartTime:2020-04-25 21:15:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-25 21:15:56 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://1365328908ef86d5aaaa7e05cc8a61cc35b0756d05105d00908bae77d1fc8fac,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.176,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 21:15:58.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9931" for this suite.
• [SLOW TEST:9.175 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
RollingUpdateDeployment should delete old pods and create new ones [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":22,"skipped":332,"failed":0}
SS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
updates the published spec when one version gets renamed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 21:15:58.764: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: set up a multi version CRD
Apr 25 21:15:58.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 21:16:14.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-859" for this suite.
• [SLOW TEST:16.013 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
updates the published spec when one version gets renamed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":23,"skipped":334,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 21:16:14.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 25 21:16:14.850: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR
Apr 25 21:16:15.442: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-25T21:16:15Z generation:1 name:name1 resourceVersion:11012175 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:5b639c57-dd78-485d-b0d5-5ad5f9c9f5ad] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Apr 25 21:16:25.448: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-25T21:16:25Z generation:1 name:name2 resourceVersion:11012209 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:8230fc9a-be6d-4df4-b608-b448de122972] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Apr 25 21:16:35.459: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-25T21:16:15Z generation:2 name:name1 resourceVersion:11012239 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:5b639c57-dd78-485d-b0d5-5ad5f9c9f5ad] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Apr 25 21:16:45.465: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-25T21:16:25Z generation:2 name:name2 resourceVersion:11012269 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:8230fc9a-be6d-4df4-b608-b448de122972] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Apr 25 21:16:55.473: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-25T21:16:15Z generation:2 name:name1 resourceVersion:11012299 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:5b639c57-dd78-485d-b0d5-5ad5f9c9f5ad] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Apr 25 21:17:05.480: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-04-25T21:16:25Z generation:2 name:name2 resourceVersion:11012329 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:8230fc9a-be6d-4df4-b608-b448de122972] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 21:17:15.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-784" for this suite.
• [SLOW TEST:61.222 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
CustomResourceDefinition Watch
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41
watch on custom resource definition objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":24,"skipped":359,"failed":0}
SSS
------------------------------
[sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 21:17:16.000: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3176 A)" && test -n
"$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3176;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3176 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3176;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3176.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3176.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3176.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3176.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3176.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3176.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3176.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3176.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3176.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3176.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3176.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3176.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3176.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 40.45.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.45.40_udp@PTR;check="$$(dig +tcp +noall +answer +search 40.45.100.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.100.45.40_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3176 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3176;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3176 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3176;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3176.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3176.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3176.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3176.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3176.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3176.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3176.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3176.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3176.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3176.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3176.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3176.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3176.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 40.45.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.45.40_udp@PTR;check="$$(dig +tcp +noall +answer +search 40.45.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.45.40_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 25 21:17:22.183: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb)
Apr 25 21:17:22.187: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb)
Apr 25 21:17:22.194: INFO: Unable to read wheezy_udp@dns-test-service.dns-3176 from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb)
Apr 25 21:17:22.197: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3176 from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb)
Apr 25 21:17:22.200: INFO: Unable to read wheezy_udp@dns-test-service.dns-3176.svc from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods
dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:22.202: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3176.svc from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:22.205: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3176.svc from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:22.207: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3176.svc from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:22.226: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:22.229: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:22.233: INFO: Unable to read jessie_udp@dns-test-service.dns-3176 from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:22.236: INFO: Unable to read jessie_tcp@dns-test-service.dns-3176 from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:22.238: INFO: Unable to read jessie_udp@dns-test-service.dns-3176.svc from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the 
requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:22.241: INFO: Unable to read jessie_tcp@dns-test-service.dns-3176.svc from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:22.244: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3176.svc from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:22.246: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3176.svc from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:22.262: INFO: Lookups using dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3176 wheezy_tcp@dns-test-service.dns-3176 wheezy_udp@dns-test-service.dns-3176.svc wheezy_tcp@dns-test-service.dns-3176.svc wheezy_udp@_http._tcp.dns-test-service.dns-3176.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3176.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3176 jessie_tcp@dns-test-service.dns-3176 jessie_udp@dns-test-service.dns-3176.svc jessie_tcp@dns-test-service.dns-3176.svc jessie_udp@_http._tcp.dns-test-service.dns-3176.svc jessie_tcp@_http._tcp.dns-test-service.dns-3176.svc] Apr 25 21:17:27.267: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:27.271: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not 
find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:27.274: INFO: Unable to read wheezy_udp@dns-test-service.dns-3176 from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:27.278: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3176 from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:27.281: INFO: Unable to read wheezy_udp@dns-test-service.dns-3176.svc from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:27.285: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3176.svc from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:27.288: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3176.svc from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:27.291: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3176.svc from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:27.318: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:27.322: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: 
the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:27.325: INFO: Unable to read jessie_udp@dns-test-service.dns-3176 from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:27.327: INFO: Unable to read jessie_tcp@dns-test-service.dns-3176 from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:27.369: INFO: Unable to read jessie_udp@dns-test-service.dns-3176.svc from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:27.373: INFO: Unable to read jessie_tcp@dns-test-service.dns-3176.svc from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:27.376: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3176.svc from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:27.379: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3176.svc from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:27.395: INFO: Lookups using dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3176 wheezy_tcp@dns-test-service.dns-3176 wheezy_udp@dns-test-service.dns-3176.svc wheezy_tcp@dns-test-service.dns-3176.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-3176.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3176.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3176 jessie_tcp@dns-test-service.dns-3176 jessie_udp@dns-test-service.dns-3176.svc jessie_tcp@dns-test-service.dns-3176.svc jessie_udp@_http._tcp.dns-test-service.dns-3176.svc jessie_tcp@_http._tcp.dns-test-service.dns-3176.svc] Apr 25 21:17:32.266: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:32.269: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:32.272: INFO: Unable to read wheezy_udp@dns-test-service.dns-3176 from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:32.276: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3176 from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:32.279: INFO: Unable to read wheezy_udp@dns-test-service.dns-3176.svc from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:32.282: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3176.svc from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:32.285: INFO: Unable to read 
wheezy_udp@_http._tcp.dns-test-service.dns-3176.svc from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:32.287: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3176.svc from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:32.308: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:32.312: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:32.315: INFO: Unable to read jessie_udp@dns-test-service.dns-3176 from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:32.318: INFO: Unable to read jessie_tcp@dns-test-service.dns-3176 from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:32.321: INFO: Unable to read jessie_udp@dns-test-service.dns-3176.svc from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:32.324: INFO: Unable to read jessie_tcp@dns-test-service.dns-3176.svc from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:32.327: 
INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3176.svc from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:32.331: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3176.svc from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:32.348: INFO: Lookups using dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3176 wheezy_tcp@dns-test-service.dns-3176 wheezy_udp@dns-test-service.dns-3176.svc wheezy_tcp@dns-test-service.dns-3176.svc wheezy_udp@_http._tcp.dns-test-service.dns-3176.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3176.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3176 jessie_tcp@dns-test-service.dns-3176 jessie_udp@dns-test-service.dns-3176.svc jessie_tcp@dns-test-service.dns-3176.svc jessie_udp@_http._tcp.dns-test-service.dns-3176.svc jessie_tcp@_http._tcp.dns-test-service.dns-3176.svc] Apr 25 21:17:37.266: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:37.269: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:37.272: INFO: Unable to read wheezy_udp@dns-test-service.dns-3176 from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 
21:17:37.275: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3176 from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:37.277: INFO: Unable to read wheezy_udp@dns-test-service.dns-3176.svc from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:37.281: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3176.svc from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:37.283: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3176.svc from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:37.286: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3176.svc from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:37.305: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:37.308: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:37.311: INFO: Unable to read jessie_udp@dns-test-service.dns-3176 from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods 
dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:37.313: INFO: Unable to read jessie_tcp@dns-test-service.dns-3176 from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:37.316: INFO: Unable to read jessie_udp@dns-test-service.dns-3176.svc from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:37.319: INFO: Unable to read jessie_tcp@dns-test-service.dns-3176.svc from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:37.322: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3176.svc from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:37.324: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3176.svc from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:37.347: INFO: Lookups using dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3176 wheezy_tcp@dns-test-service.dns-3176 wheezy_udp@dns-test-service.dns-3176.svc wheezy_tcp@dns-test-service.dns-3176.svc wheezy_udp@_http._tcp.dns-test-service.dns-3176.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3176.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3176 jessie_tcp@dns-test-service.dns-3176 jessie_udp@dns-test-service.dns-3176.svc jessie_tcp@dns-test-service.dns-3176.svc 
jessie_udp@_http._tcp.dns-test-service.dns-3176.svc jessie_tcp@_http._tcp.dns-test-service.dns-3176.svc] Apr 25 21:17:42.267: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:42.270: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:42.274: INFO: Unable to read wheezy_udp@dns-test-service.dns-3176 from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:42.276: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3176 from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:42.279: INFO: Unable to read wheezy_udp@dns-test-service.dns-3176.svc from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:42.282: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3176.svc from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:42.284: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3176.svc from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:42.287: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3176.svc from pod 
dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:42.310: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:42.313: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:42.316: INFO: Unable to read jessie_udp@dns-test-service.dns-3176 from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:42.318: INFO: Unable to read jessie_tcp@dns-test-service.dns-3176 from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:42.321: INFO: Unable to read jessie_udp@dns-test-service.dns-3176.svc from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:42.323: INFO: Unable to read jessie_tcp@dns-test-service.dns-3176.svc from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:42.326: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3176.svc from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:42.329: INFO: Unable to read 
jessie_tcp@_http._tcp.dns-test-service.dns-3176.svc from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:42.348: INFO: Lookups using dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3176 wheezy_tcp@dns-test-service.dns-3176 wheezy_udp@dns-test-service.dns-3176.svc wheezy_tcp@dns-test-service.dns-3176.svc wheezy_udp@_http._tcp.dns-test-service.dns-3176.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3176.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3176 jessie_tcp@dns-test-service.dns-3176 jessie_udp@dns-test-service.dns-3176.svc jessie_tcp@dns-test-service.dns-3176.svc jessie_udp@_http._tcp.dns-test-service.dns-3176.svc jessie_tcp@_http._tcp.dns-test-service.dns-3176.svc] Apr 25 21:17:47.267: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:47.271: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:47.274: INFO: Unable to read wheezy_udp@dns-test-service.dns-3176 from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:47.277: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3176 from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:47.281: INFO: Unable to read 
wheezy_udp@dns-test-service.dns-3176.svc from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:47.284: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3176.svc from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:47.287: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3176.svc from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:47.290: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3176.svc from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:47.314: INFO: Unable to read jessie_udp@dns-test-service from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:47.317: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:47.320: INFO: Unable to read jessie_udp@dns-test-service.dns-3176 from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:47.323: INFO: Unable to read jessie_tcp@dns-test-service.dns-3176 from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:47.325: 
INFO: Unable to read jessie_udp@dns-test-service.dns-3176.svc from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:47.328: INFO: Unable to read jessie_tcp@dns-test-service.dns-3176.svc from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:47.331: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3176.svc from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:47.343: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3176.svc from pod dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb: the server could not find the requested resource (get pods dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb) Apr 25 21:17:47.360: INFO: Lookups using dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3176 wheezy_tcp@dns-test-service.dns-3176 wheezy_udp@dns-test-service.dns-3176.svc wheezy_tcp@dns-test-service.dns-3176.svc wheezy_udp@_http._tcp.dns-test-service.dns-3176.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3176.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3176 jessie_tcp@dns-test-service.dns-3176 jessie_udp@dns-test-service.dns-3176.svc jessie_tcp@dns-test-service.dns-3176.svc jessie_udp@_http._tcp.dns-test-service.dns-3176.svc jessie_tcp@_http._tcp.dns-test-service.dns-3176.svc] Apr 25 21:17:52.343: INFO: DNS probes using dns-3176/dns-test-af8be79b-4aa1-416b-a8fa-7fa4535e8fbb succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:17:52.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3176" for this suite. • [SLOW TEST:36.921 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":25,"skipped":362,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:17:52.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-1e95048e-6c34-4b64-8b5f-4c7fedde2bc0 STEP: Creating a pod to test consume configMaps Apr 25 21:17:52.997: INFO: Waiting up to 5m0s for pod "pod-configmaps-bc3800b1-b105-440f-b8e1-574fc709fa9b" in namespace "configmap-4788" to be "success or failure" Apr 25 21:17:53.017: INFO: Pod 
"pod-configmaps-bc3800b1-b105-440f-b8e1-574fc709fa9b": Phase="Pending", Reason="", readiness=false. Elapsed: 19.917335ms Apr 25 21:17:55.021: INFO: Pod "pod-configmaps-bc3800b1-b105-440f-b8e1-574fc709fa9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024033579s Apr 25 21:17:57.026: INFO: Pod "pod-configmaps-bc3800b1-b105-440f-b8e1-574fc709fa9b": Phase="Running", Reason="", readiness=true. Elapsed: 4.028733307s Apr 25 21:17:59.031: INFO: Pod "pod-configmaps-bc3800b1-b105-440f-b8e1-574fc709fa9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.033887428s STEP: Saw pod success Apr 25 21:17:59.031: INFO: Pod "pod-configmaps-bc3800b1-b105-440f-b8e1-574fc709fa9b" satisfied condition "success or failure" Apr 25 21:17:59.034: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-bc3800b1-b105-440f-b8e1-574fc709fa9b container configmap-volume-test: STEP: delete the pod Apr 25 21:17:59.073: INFO: Waiting for pod pod-configmaps-bc3800b1-b105-440f-b8e1-574fc709fa9b to disappear Apr 25 21:17:59.101: INFO: Pod pod-configmaps-bc3800b1-b105-440f-b8e1-574fc709fa9b no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:17:59.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4788" for this suite. 
• [SLOW TEST:6.188 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":26,"skipped":378,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:17:59.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0425 21:18:09.174748 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 25 21:18:09.174: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:18:09.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6858" for this suite. 
• [SLOW TEST:10.071 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":27,"skipped":388,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:18:09.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 25 21:18:09.255: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:18:15.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8799" for this suite. 
• [SLOW TEST:6.423 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":28,"skipped":397,"failed":0} SSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:18:15.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Apr 25 21:18:15.684: INFO: Waiting up to 5m0s for pod "downward-api-16a47163-9ed6-4ca8-9ac0-07836bb7e44d" in namespace "downward-api-1398" to be "success or failure" Apr 25 21:18:15.690: INFO: Pod "downward-api-16a47163-9ed6-4ca8-9ac0-07836bb7e44d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.700781ms Apr 25 21:18:17.694: INFO: Pod "downward-api-16a47163-9ed6-4ca8-9ac0-07836bb7e44d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009914365s Apr 25 21:18:19.698: INFO: Pod "downward-api-16a47163-9ed6-4ca8-9ac0-07836bb7e44d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014049135s STEP: Saw pod success Apr 25 21:18:19.698: INFO: Pod "downward-api-16a47163-9ed6-4ca8-9ac0-07836bb7e44d" satisfied condition "success or failure" Apr 25 21:18:19.701: INFO: Trying to get logs from node jerma-worker2 pod downward-api-16a47163-9ed6-4ca8-9ac0-07836bb7e44d container dapi-container: STEP: delete the pod Apr 25 21:18:19.756: INFO: Waiting for pod downward-api-16a47163-9ed6-4ca8-9ac0-07836bb7e44d to disappear Apr 25 21:18:19.775: INFO: Pod downward-api-16a47163-9ed6-4ca8-9ac0-07836bb7e44d no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:18:19.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1398" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":29,"skipped":405,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:18:19.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 25 21:18:19.862: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d0a274fa-75d9-40ce-98d4-957ef654df6a" in namespace "projected-955" to be "success or failure" Apr 25 21:18:19.864: INFO: Pod "downwardapi-volume-d0a274fa-75d9-40ce-98d4-957ef654df6a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.527133ms Apr 25 21:18:21.869: INFO: Pod "downwardapi-volume-d0a274fa-75d9-40ce-98d4-957ef654df6a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006994938s Apr 25 21:18:23.873: INFO: Pod "downwardapi-volume-d0a274fa-75d9-40ce-98d4-957ef654df6a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011397226s STEP: Saw pod success Apr 25 21:18:23.873: INFO: Pod "downwardapi-volume-d0a274fa-75d9-40ce-98d4-957ef654df6a" satisfied condition "success or failure" Apr 25 21:18:23.876: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-d0a274fa-75d9-40ce-98d4-957ef654df6a container client-container: STEP: delete the pod Apr 25 21:18:23.949: INFO: Waiting for pod downwardapi-volume-d0a274fa-75d9-40ce-98d4-957ef654df6a to disappear Apr 25 21:18:23.954: INFO: Pod downwardapi-volume-d0a274fa-75d9-40ce-98d4-957ef654df6a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:18:23.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-955" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":30,"skipped":436,"failed":0} SSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:18:23.961: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 25 21:18:24.035: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bbf86a7d-ff18-4337-89cc-b04b5f60ddc4" in namespace "downward-api-1219" to be "success or failure" Apr 25 21:18:24.038: INFO: Pod "downwardapi-volume-bbf86a7d-ff18-4337-89cc-b04b5f60ddc4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.372656ms Apr 25 21:18:26.042: INFO: Pod "downwardapi-volume-bbf86a7d-ff18-4337-89cc-b04b5f60ddc4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007266404s Apr 25 21:18:28.046: INFO: Pod "downwardapi-volume-bbf86a7d-ff18-4337-89cc-b04b5f60ddc4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011360617s STEP: Saw pod success Apr 25 21:18:28.046: INFO: Pod "downwardapi-volume-bbf86a7d-ff18-4337-89cc-b04b5f60ddc4" satisfied condition "success or failure" Apr 25 21:18:28.049: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-bbf86a7d-ff18-4337-89cc-b04b5f60ddc4 container client-container: STEP: delete the pod Apr 25 21:18:28.088: INFO: Waiting for pod downwardapi-volume-bbf86a7d-ff18-4337-89cc-b04b5f60ddc4 to disappear Apr 25 21:18:28.125: INFO: Pod downwardapi-volume-bbf86a7d-ff18-4337-89cc-b04b5f60ddc4 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:18:28.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1219" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":31,"skipped":442,"failed":0} SSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:18:28.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-9163/configmap-test-695b9dd2-259c-46c7-bb19-16836f771a68 STEP: Creating a pod to test consume configMaps Apr 25 21:18:28.257: INFO: Waiting up to 5m0s for pod "pod-configmaps-884737e2-7c67-40a1-b1e8-f56f3445142a" in namespace "configmap-9163" to be "success or failure" Apr 25 21:18:28.260: INFO: Pod "pod-configmaps-884737e2-7c67-40a1-b1e8-f56f3445142a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.882061ms Apr 25 21:18:30.263: INFO: Pod "pod-configmaps-884737e2-7c67-40a1-b1e8-f56f3445142a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005989664s Apr 25 21:18:32.268: INFO: Pod "pod-configmaps-884737e2-7c67-40a1-b1e8-f56f3445142a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010446108s STEP: Saw pod success Apr 25 21:18:32.268: INFO: Pod "pod-configmaps-884737e2-7c67-40a1-b1e8-f56f3445142a" satisfied condition "success or failure" Apr 25 21:18:32.271: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-884737e2-7c67-40a1-b1e8-f56f3445142a container env-test: STEP: delete the pod Apr 25 21:18:32.305: INFO: Waiting for pod pod-configmaps-884737e2-7c67-40a1-b1e8-f56f3445142a to disappear Apr 25 21:18:32.312: INFO: Pod pod-configmaps-884737e2-7c67-40a1-b1e8-f56f3445142a no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:18:32.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9163" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":32,"skipped":453,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:18:32.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 25 21:18:32.392: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) 
allows request with any unknown properties Apr 25 21:18:35.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8463 create -f -' Apr 25 21:18:38.276: INFO: stderr: "" Apr 25 21:18:38.277: INFO: stdout: "e2e-test-crd-publish-openapi-1908-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Apr 25 21:18:38.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8463 delete e2e-test-crd-publish-openapi-1908-crds test-cr' Apr 25 21:18:38.383: INFO: stderr: "" Apr 25 21:18:38.383: INFO: stdout: "e2e-test-crd-publish-openapi-1908-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Apr 25 21:18:38.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8463 apply -f -' Apr 25 21:18:38.651: INFO: stderr: "" Apr 25 21:18:38.651: INFO: stdout: "e2e-test-crd-publish-openapi-1908-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Apr 25 21:18:38.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8463 delete e2e-test-crd-publish-openapi-1908-crds test-cr' Apr 25 21:18:38.765: INFO: stderr: "" Apr 25 21:18:38.766: INFO: stdout: "e2e-test-crd-publish-openapi-1908-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Apr 25 21:18:38.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1908-crds' Apr 25 21:18:39.021: INFO: stderr: "" Apr 25 21:18:39.021: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-1908-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. 
Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:18:41.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8463" for this suite. 
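The behavior exercised above — a CRD schema that sets `x-kubernetes-preserve-unknown-fields` on a nested object, so `kubectl create`/`apply` accept arbitrary properties inside that subtree while unknown fields elsewhere are still rejected — can be sketched outside a cluster. This is a hedged illustration, not the apiserver's actual structural-schema validator; the schema and field names below are hypothetical:

```python
# Minimal sketch of structural-schema validation where one nested object
# sets x-kubernetes-preserve-unknown-fields: true. Hypothetical schema;
# NOT the Kubernetes apiserver's real validation code.

def validate(obj, schema):
    """Reject unknown properties unless the schema preserves them."""
    if schema.get("x-kubernetes-preserve-unknown-fields"):
        return True  # anything goes inside this subtree
    props = schema.get("properties", {})
    for key, value in obj.items():
        if key not in props:
            return False  # unknown field: pruned/rejected by default
        if isinstance(value, dict):
            if not validate(value, props[key]):
                return False
    return True

# Schema loosely modeled on the test CRD: spec preserves unknown fields.
schema = {
    "properties": {
        "spec": {"x-kubernetes-preserve-unknown-fields": True},
        "status": {"properties": {"ready": {}}},
    }
}

cr = {"spec": {"anything": {"nested": 1}}, "status": {"ready": True}}
assert validate(cr, schema)                           # unknown keys allowed in spec
assert not validate({"status": {"oops": 1}}, schema)  # rejected elsewhere
```

The same split explains why `kubectl explain` above can describe `apiVersion`, `kind`, and `metadata` precisely but only says "Specification of Waldo" for `spec`: there is no schema to expand inside a preserved subtree.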
• [SLOW TEST:9.644 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":33,"skipped":469,"failed":0} SSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:18:41.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:18:46.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9816" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":34,"skipped":476,"failed":0} SSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:18:46.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Apr 25 21:18:54.208: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 25 21:18:54.217: INFO: Pod pod-with-prestop-http-hook still exists Apr 25 21:18:56.218: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 25 21:18:56.222: INFO: Pod pod-with-prestop-http-hook still exists Apr 25 21:18:58.218: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 25 21:18:58.222: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:18:58.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9650" for this suite. 
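The repeated "Waiting for pod pod-with-prestop-http-hook to disappear" / "still exists" lines above come from a simple poll loop: check every couple of seconds until the object is gone or a timeout elapses. A hedged, self-contained sketch of that pattern, with a fake `get_pod` standing in for the real API call (names are illustrative, not the e2e framework's API):

```python
import time

def wait_for_disappear(get_pod, interval=0.01, timeout=1.0):
    """Poll until get_pod() returns None (object gone) or timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_pod() is None:
            return True   # pod no longer exists
        time.sleep(interval)
    return False          # still present at timeout

# Fake API: the pod "exists" for the first two polls, then disappears,
# mirroring the two "still exists" lines before "no longer exists".
state = {"polls": 0}
def get_pod():
    state["polls"] += 1
    return None if state["polls"] > 2 else {"name": "pod-with-prestop-http-hook"}

assert wait_for_disappear(get_pod) is True
```

The real test uses this window to verify the kubelet delivered the `preStop` HTTP hook to the handler pod before the container was torn down.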
• [SLOW TEST:12.132 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":35,"skipped":481,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:18:58.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Apr 25 21:18:58.281: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 25 21:18:58.305: INFO: Waiting for terminating namespaces to be deleted... 
Apr 25 21:18:58.308: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Apr 25 21:18:58.313: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 25 21:18:58.313: INFO: Container kindnet-cni ready: true, restart count 0 Apr 25 21:18:58.313: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 25 21:18:58.313: INFO: Container kube-proxy ready: true, restart count 0 Apr 25 21:18:58.313: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Apr 25 21:18:58.318: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 25 21:18:58.318: INFO: Container kindnet-cni ready: true, restart count 0 Apr 25 21:18:58.318: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) Apr 25 21:18:58.318: INFO: Container kube-bench ready: false, restart count 0 Apr 25 21:18:58.318: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 25 21:18:58.318: INFO: Container kube-proxy ready: true, restart count 0 Apr 25 21:18:58.318: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) Apr 25 21:18:58.318: INFO: Container kube-hunter ready: false, restart count 0 Apr 25 21:18:58.318: INFO: pod-handle-http-request from container-lifecycle-hook-9650 started at 2020-04-25 21:18:46 +0000 UTC (1 container statuses recorded) Apr 25 21:18:58.318: INFO: Container pod-handle-http-request ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: verifying the node has the label node jerma-worker STEP: verifying the node has the label node jerma-worker2 Apr 25 
21:18:58.459: INFO: Pod pod-handle-http-request requesting resource cpu=0m on Node jerma-worker2 Apr 25 21:18:58.459: INFO: Pod kindnet-c5svj requesting resource cpu=100m on Node jerma-worker Apr 25 21:18:58.459: INFO: Pod kindnet-zk6sq requesting resource cpu=100m on Node jerma-worker2 Apr 25 21:18:58.459: INFO: Pod kube-proxy-44mlz requesting resource cpu=0m on Node jerma-worker Apr 25 21:18:58.459: INFO: Pod kube-proxy-75q42 requesting resource cpu=0m on Node jerma-worker2 STEP: Starting Pods to consume most of the cluster CPU. Apr 25 21:18:58.459: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker Apr 25 21:18:58.465: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-5bdabfbe-5c6c-4989-a2b6-134519049ece.16092cbe12b1a738], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5632/filler-pod-5bdabfbe-5c6c-4989-a2b6-134519049ece to jerma-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-5bdabfbe-5c6c-4989-a2b6-134519049ece.16092cbe82e25993], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-5bdabfbe-5c6c-4989-a2b6-134519049ece.16092cbeb7f32cfb], Reason = [Created], Message = [Created container filler-pod-5bdabfbe-5c6c-4989-a2b6-134519049ece] STEP: Considering event: Type = [Normal], Name = [filler-pod-5bdabfbe-5c6c-4989-a2b6-134519049ece.16092cbec7cf2b22], Reason = [Started], Message = [Started container filler-pod-5bdabfbe-5c6c-4989-a2b6-134519049ece] STEP: Considering event: Type = [Normal], Name = [filler-pod-b7c3a5eb-d7a0-4753-bfce-be993410cb74.16092cbe122b6e74], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5632/filler-pod-b7c3a5eb-d7a0-4753-bfce-be993410cb74 to jerma-worker] STEP: Considering event: Type = [Normal], Name = 
[filler-pod-b7c3a5eb-d7a0-4753-bfce-be993410cb74.16092cbe61c1b366], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-b7c3a5eb-d7a0-4753-bfce-be993410cb74.16092cbeb2e99b2b], Reason = [Created], Message = [Created container filler-pod-b7c3a5eb-d7a0-4753-bfce-be993410cb74] STEP: Considering event: Type = [Normal], Name = [filler-pod-b7c3a5eb-d7a0-4753-bfce-be993410cb74.16092cbec1b7d74a], Reason = [Started], Message = [Started container filler-pod-b7c3a5eb-d7a0-4753-bfce-be993410cb74] STEP: Considering event: Type = [Warning], Name = [additional-pod.16092cbf0229527a], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node jerma-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:19:03.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5632" for this suite. 
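The scheduler-predicates test above works in millicores: it sums the CPU each existing pod requests on a node ("cpu=100m", "cpu=0m"), sizes a filler pod to consume most of what remains (hence the 11130m requests), then confirms one more pod fails with `Insufficient cpu`. A hedged sketch of that arithmetic — the allocatable figure and the 0.95 fraction below are made up for illustration; the real test reads allocatable capacity from node status:

```python
def parse_milli(q):
    """Parse a CPU quantity like '100m' or '2' into millicores."""
    return int(q[:-1]) if q.endswith("m") else int(float(q) * 1000)

def filler_request(allocatable_m, pod_requests, fraction=0.95):
    """Millicores a filler pod should request to consume `fraction`
    of what is left after existing pods' requests on the node."""
    used = sum(parse_milli(r) for r in pod_requests)
    return int((allocatable_m - used) * fraction)

# Hypothetical node: 16 cores allocatable; kindnet requests 100m,
# kube-proxy requests 0m (as in the log above).
assert parse_milli("100m") == 100
assert parse_milli("2") == 2000
assert filler_request(16000, ["100m", "0m"]) == 15105
```

Once each node carries such a filler pod, any additional pod with a nonzero CPU request cannot fit, which is exactly the `FailedScheduling` event the test waits for.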
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:5.620 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":36,"skipped":495,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:19:03.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 25 21:19:04.837: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 25 21:19:06.870: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, 
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723446344, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723446344, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723446344, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723446344, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 25 21:19:08.873: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723446344, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723446344, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723446344, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723446344, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 25 21:19:11.903: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 
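The test below registers a webhook the apiserver cannot reach and sets `failurePolicy: Fail` ("fail closed"), so any matched request is rejected rather than let through when the call errors out. A hedged sketch of that admission decision — not the apiserver's real code path, just the policy logic:

```python
def admit(call_webhook, failure_policy="Fail"):
    """Return True if the request is allowed. When the webhook call
    errors, failurePolicy decides: Fail -> reject, Ignore -> allow."""
    try:
        return call_webhook()            # webhook's own allow/deny verdict
    except ConnectionError:
        return failure_policy == "Ignore"

def unreachable():
    raise ConnectionError("no such host")  # a server the apiserver can't talk to

assert admit(unreachable, "Fail") is False    # fail closed: rejected
assert admit(unreachable, "Ignore") is True   # fail open: allowed
assert admit(lambda: True, "Fail") is True    # reachable and allowed
```

Fail-closed is the safe default for security-enforcing webhooks, at the cost that a broken webhook blocks all matched operations — which is why the test also excludes its own marker namespaces from the webhook's scope.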
[It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:19:12.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-174" for this suite. STEP: Destroying namespace "webhook-174-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.478 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":37,"skipped":500,"failed":0} SSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes 
client Apr 25 21:19:12.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 25 21:19:12.418: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Apr 25 21:19:12.424: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:19:12.429: INFO: Number of nodes with available pods: 0 Apr 25 21:19:12.429: INFO: Node jerma-worker is running more than one daemon pod Apr 25 21:19:13.456: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:19:13.480: INFO: Number of nodes with available pods: 0 Apr 25 21:19:13.480: INFO: Node jerma-worker is running more than one daemon pod Apr 25 21:19:14.434: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:19:14.438: INFO: Number of nodes with available pods: 0 Apr 25 21:19:14.438: INFO: Node jerma-worker is running more than one daemon pod Apr 25 21:19:15.439: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:19:15.447: INFO: Number of nodes with available pods: 1 Apr 25 21:19:15.447: INFO: Node jerma-worker2 is 
running more than one daemon pod Apr 25 21:19:16.435: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:19:16.439: INFO: Number of nodes with available pods: 2 Apr 25 21:19:16.439: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Apr 25 21:19:16.486: INFO: Wrong image for pod: daemon-set-9fkqw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 21:19:16.486: INFO: Wrong image for pod: daemon-set-nzrpb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 21:19:16.522: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:19:17.558: INFO: Wrong image for pod: daemon-set-9fkqw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 21:19:17.558: INFO: Wrong image for pod: daemon-set-nzrpb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 21:19:17.562: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:19:18.527: INFO: Wrong image for pod: daemon-set-9fkqw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 21:19:18.527: INFO: Wrong image for pod: daemon-set-nzrpb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 25 21:19:18.531: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:19:19.549: INFO: Wrong image for pod: daemon-set-9fkqw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 21:19:19.549: INFO: Pod daemon-set-9fkqw is not available Apr 25 21:19:19.549: INFO: Wrong image for pod: daemon-set-nzrpb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 21:19:19.552: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:19:20.527: INFO: Wrong image for pod: daemon-set-9fkqw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 21:19:20.527: INFO: Pod daemon-set-9fkqw is not available Apr 25 21:19:20.527: INFO: Wrong image for pod: daemon-set-nzrpb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 21:19:20.531: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:19:21.527: INFO: Wrong image for pod: daemon-set-9fkqw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 21:19:21.527: INFO: Pod daemon-set-9fkqw is not available Apr 25 21:19:21.527: INFO: Wrong image for pod: daemon-set-nzrpb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 25 21:19:21.531: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:19:22.526: INFO: Wrong image for pod: daemon-set-9fkqw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 21:19:22.526: INFO: Pod daemon-set-9fkqw is not available Apr 25 21:19:22.526: INFO: Wrong image for pod: daemon-set-nzrpb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 21:19:22.531: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:19:23.527: INFO: Wrong image for pod: daemon-set-9fkqw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 21:19:23.527: INFO: Pod daemon-set-9fkqw is not available Apr 25 21:19:23.527: INFO: Wrong image for pod: daemon-set-nzrpb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 21:19:23.532: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:19:24.527: INFO: Wrong image for pod: daemon-set-9fkqw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 21:19:24.527: INFO: Pod daemon-set-9fkqw is not available Apr 25 21:19:24.527: INFO: Wrong image for pod: daemon-set-nzrpb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 25 21:19:24.532: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:19:25.527: INFO: Wrong image for pod: daemon-set-9fkqw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 21:19:25.527: INFO: Pod daemon-set-9fkqw is not available Apr 25 21:19:25.527: INFO: Wrong image for pod: daemon-set-nzrpb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 21:19:25.532: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:19:26.527: INFO: Wrong image for pod: daemon-set-9fkqw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 21:19:26.527: INFO: Pod daemon-set-9fkqw is not available Apr 25 21:19:26.527: INFO: Wrong image for pod: daemon-set-nzrpb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 21:19:26.531: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:19:27.526: INFO: Wrong image for pod: daemon-set-9fkqw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 21:19:27.527: INFO: Pod daemon-set-9fkqw is not available Apr 25 21:19:27.527: INFO: Wrong image for pod: daemon-set-nzrpb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 25 21:19:27.530: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:19:28.528: INFO: Wrong image for pod: daemon-set-9fkqw. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 21:19:28.528: INFO: Pod daemon-set-9fkqw is not available Apr 25 21:19:28.528: INFO: Wrong image for pod: daemon-set-nzrpb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 21:19:28.532: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:19:29.552: INFO: Wrong image for pod: daemon-set-nzrpb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 21:19:29.552: INFO: Pod daemon-set-q7ms9 is not available Apr 25 21:19:29.555: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:19:30.526: INFO: Wrong image for pod: daemon-set-nzrpb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 21:19:30.526: INFO: Pod daemon-set-q7ms9 is not available Apr 25 21:19:30.543: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:19:31.527: INFO: Wrong image for pod: daemon-set-nzrpb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
Apr 25 21:19:31.527: INFO: Pod daemon-set-q7ms9 is not available Apr 25 21:19:31.531: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:19:32.526: INFO: Wrong image for pod: daemon-set-nzrpb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 21:19:32.529: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:19:33.527: INFO: Wrong image for pod: daemon-set-nzrpb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 21:19:33.527: INFO: Pod daemon-set-nzrpb is not available Apr 25 21:19:33.534: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:19:34.527: INFO: Wrong image for pod: daemon-set-nzrpb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 21:19:34.527: INFO: Pod daemon-set-nzrpb is not available Apr 25 21:19:34.532: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:19:35.527: INFO: Wrong image for pod: daemon-set-nzrpb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 21:19:35.527: INFO: Pod daemon-set-nzrpb is not available Apr 25 21:19:35.532: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:19:36.527: INFO: Wrong image for pod: daemon-set-nzrpb. 
Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 21:19:36.527: INFO: Pod daemon-set-nzrpb is not available Apr 25 21:19:36.532: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:19:37.527: INFO: Wrong image for pod: daemon-set-nzrpb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 21:19:37.527: INFO: Pod daemon-set-nzrpb is not available Apr 25 21:19:37.531: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:19:38.526: INFO: Wrong image for pod: daemon-set-nzrpb. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. Apr 25 21:19:38.526: INFO: Pod daemon-set-nzrpb is not available Apr 25 21:19:38.530: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:19:39.551: INFO: Pod daemon-set-srk7j is not available Apr 25 21:19:39.574: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
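The RollingUpdate above replaces DaemonSet pods one node at a time: with the default `maxUnavailable` of 1, a pod is deleted, its replacement must become ready, and only then is the next node's pod replaced — which is why the log shows exactly one pod "not available" at any moment. A hedged simulation of that invariant (toy model, not the DaemonSet controller):

```python
def rolling_update(pods, new_image, max_unavailable=1):
    """Replace each pod's image one batch at a time, yielding how many
    pods are unavailable at each step of the rollout."""
    pending = [p for p in pods if pods[p] != new_image]
    while pending:
        batch, pending = pending[:max_unavailable], pending[max_unavailable:]
        yield len(batch)          # these pods are down while being replaced
        for p in batch:
            pods[p] = new_image   # replacement pod is ready again
    yield 0

# The two pods from the log, starting on the old image.
pods = {"daemon-set-9fkqw": "httpd:2.4.38-alpine",
        "daemon-set-nzrpb": "httpd:2.4.38-alpine"}
steps = list(rolling_update(pods, "agnhost:2.8"))
assert max(steps) <= 1                              # never exceeds maxUnavailable
assert all(v == "agnhost:2.8" for v in pods.values())
```

In the real rollout the "not available" window for each pod spans deletion plus image start-up, which is why `daemon-set-9fkqw` is reported unavailable across many poll iterations before its replacement `daemon-set-q7ms9` appears.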
Apr 25 21:19:39.578: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:19:39.602: INFO: Number of nodes with available pods: 1 Apr 25 21:19:39.602: INFO: Node jerma-worker2 is running more than one daemon pod Apr 25 21:19:40.607: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:19:40.610: INFO: Number of nodes with available pods: 1 Apr 25 21:19:40.610: INFO: Node jerma-worker2 is running more than one daemon pod Apr 25 21:19:41.618: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:19:41.627: INFO: Number of nodes with available pods: 1 Apr 25 21:19:41.627: INFO: Node jerma-worker2 is running more than one daemon pod Apr 25 21:19:42.608: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:19:42.611: INFO: Number of nodes with available pods: 2 Apr 25 21:19:42.611: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7423, will wait for the garbage collector to delete the pods Apr 25 21:19:42.692: INFO: Deleting DaemonSet.extensions daemon-set took: 13.504736ms Apr 25 21:19:42.992: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.304092ms Apr 25 21:19:49.596: INFO: Number of nodes with available pods: 0 Apr 25 21:19:49.596: INFO: Number of running nodes: 0, number of available 
pods: 0 Apr 25 21:19:49.599: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7423/daemonsets","resourceVersion":"11013391"},"items":null} Apr 25 21:19:49.603: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7423/pods","resourceVersion":"11013391"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:19:49.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7423" for this suite. • [SLOW TEST:37.285 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":38,"skipped":504,"failed":0} S ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:19:49.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46 [It] should be 
submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Apr 25 21:19:53.929: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Apr 25 21:20:04.021: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:20:04.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-244" for this suite. • [SLOW TEST:14.413 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":39,"skipped":505,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:20:04.032: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:20:20.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5515" for this suite. • [SLOW TEST:16.263 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":40,"skipped":517,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:20:20.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-2ff2c7f2-ffdf-442d-84c4-76cb459045b3 STEP: Creating a pod to test consume configMaps Apr 25 21:20:20.356: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8c3d3d9d-63c0-4e00-b310-589688efa5e6" in namespace "projected-6710" to be "success or failure" Apr 25 21:20:20.371: INFO: Pod "pod-projected-configmaps-8c3d3d9d-63c0-4e00-b310-589688efa5e6": Phase="Pending", Reason="", readiness=false. Elapsed: 14.675226ms Apr 25 21:20:22.375: INFO: Pod "pod-projected-configmaps-8c3d3d9d-63c0-4e00-b310-589688efa5e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018677492s Apr 25 21:20:24.379: INFO: Pod "pod-projected-configmaps-8c3d3d9d-63c0-4e00-b310-589688efa5e6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.022748414s STEP: Saw pod success Apr 25 21:20:24.379: INFO: Pod "pod-projected-configmaps-8c3d3d9d-63c0-4e00-b310-589688efa5e6" satisfied condition "success or failure" Apr 25 21:20:24.381: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-8c3d3d9d-63c0-4e00-b310-589688efa5e6 container projected-configmap-volume-test: STEP: delete the pod Apr 25 21:20:24.475: INFO: Waiting for pod pod-projected-configmaps-8c3d3d9d-63c0-4e00-b310-589688efa5e6 to disappear Apr 25 21:20:24.497: INFO: Pod pod-projected-configmaps-8c3d3d9d-63c0-4e00-b310-589688efa5e6 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:20:24.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6710" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":41,"skipped":535,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:20:24.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from NodePort to ExternalName [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service nodeport-service with the type=NodePort in namespace services-3921 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-3921 STEP: creating replication controller externalsvc in namespace services-3921 I0425 21:20:24.738234 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-3921, replica count: 2 I0425 21:20:27.788644 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0425 21:20:30.788924 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Apr 25 21:20:30.850: INFO: Creating new exec pod Apr 25 21:20:34.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3921 execpodlhcsx -- /bin/sh -x -c nslookup nodeport-service' Apr 25 21:20:35.096: INFO: stderr: "I0425 21:20:35.005938 256 log.go:172] (0xc000aa2dc0) (0xc000a9a1e0) Create stream\nI0425 21:20:35.006010 256 log.go:172] (0xc000aa2dc0) (0xc000a9a1e0) Stream added, broadcasting: 1\nI0425 21:20:35.008821 256 log.go:172] (0xc000aa2dc0) Reply frame received for 1\nI0425 21:20:35.008874 256 log.go:172] (0xc000aa2dc0) (0xc000a76000) Create stream\nI0425 21:20:35.008891 256 log.go:172] (0xc000aa2dc0) (0xc000a76000) Stream added, broadcasting: 3\nI0425 21:20:35.010041 256 log.go:172] (0xc000aa2dc0) Reply frame received for 3\nI0425 21:20:35.010079 256 log.go:172] (0xc000aa2dc0) (0xc00065fc20) Create stream\nI0425 21:20:35.010089 256 log.go:172] (0xc000aa2dc0) (0xc00065fc20) Stream added, broadcasting: 5\nI0425 21:20:35.011118 256 log.go:172] 
(0xc000aa2dc0) Reply frame received for 5\nI0425 21:20:35.079236 256 log.go:172] (0xc000aa2dc0) Data frame received for 5\nI0425 21:20:35.079270 256 log.go:172] (0xc00065fc20) (5) Data frame handling\nI0425 21:20:35.079306 256 log.go:172] (0xc00065fc20) (5) Data frame sent\n+ nslookup nodeport-service\nI0425 21:20:35.087533 256 log.go:172] (0xc000aa2dc0) Data frame received for 3\nI0425 21:20:35.087555 256 log.go:172] (0xc000a76000) (3) Data frame handling\nI0425 21:20:35.087584 256 log.go:172] (0xc000a76000) (3) Data frame sent\nI0425 21:20:35.088906 256 log.go:172] (0xc000aa2dc0) Data frame received for 3\nI0425 21:20:35.088926 256 log.go:172] (0xc000a76000) (3) Data frame handling\nI0425 21:20:35.088963 256 log.go:172] (0xc000a76000) (3) Data frame sent\nI0425 21:20:35.089669 256 log.go:172] (0xc000aa2dc0) Data frame received for 5\nI0425 21:20:35.089697 256 log.go:172] (0xc00065fc20) (5) Data frame handling\nI0425 21:20:35.089713 256 log.go:172] (0xc000aa2dc0) Data frame received for 3\nI0425 21:20:35.089720 256 log.go:172] (0xc000a76000) (3) Data frame handling\nI0425 21:20:35.091475 256 log.go:172] (0xc000aa2dc0) Data frame received for 1\nI0425 21:20:35.091510 256 log.go:172] (0xc000a9a1e0) (1) Data frame handling\nI0425 21:20:35.091525 256 log.go:172] (0xc000a9a1e0) (1) Data frame sent\nI0425 21:20:35.091545 256 log.go:172] (0xc000aa2dc0) (0xc000a9a1e0) Stream removed, broadcasting: 1\nI0425 21:20:35.091580 256 log.go:172] (0xc000aa2dc0) Go away received\nI0425 21:20:35.091873 256 log.go:172] (0xc000aa2dc0) (0xc000a9a1e0) Stream removed, broadcasting: 1\nI0425 21:20:35.091888 256 log.go:172] (0xc000aa2dc0) (0xc000a76000) Stream removed, broadcasting: 3\nI0425 21:20:35.091894 256 log.go:172] (0xc000aa2dc0) (0xc00065fc20) Stream removed, broadcasting: 5\n" Apr 25 21:20:35.096: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-3921.svc.cluster.local\tcanonical name = 
externalsvc.services-3921.svc.cluster.local.\nName:\texternalsvc.services-3921.svc.cluster.local\nAddress: 10.109.1.115\n\n" STEP: deleting ReplicationController externalsvc in namespace services-3921, will wait for the garbage collector to delete the pods Apr 25 21:20:35.156: INFO: Deleting ReplicationController externalsvc took: 6.085024ms Apr 25 21:20:35.456: INFO: Terminating ReplicationController externalsvc pods took: 300.234057ms Apr 25 21:20:49.606: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:20:49.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3921" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:25.155 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":42,"skipped":562,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:20:49.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account 
to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Apr 25 21:20:49.721: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Apr 25 21:20:58.765: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:20:58.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8113" for this suite. 
• [SLOW TEST:9.115 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":43,"skipped":587,"failed":0} SSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:20:58.777: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 25 21:21:20.886: INFO: Container started at 2020-04-25 21:21:01 +0000 UTC, pod became ready at 2020-04-25 21:21:20 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:21:20.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4120" for this suite. 
• [SLOW TEST:22.118 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":44,"skipped":592,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:21:20.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-2834 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating statefulset ss in namespace statefulset-2834 Apr 25 21:21:21.020: INFO: Found 0 stateful pods, waiting for 1 Apr 25 21:21:31.025: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - 
Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Apr 25 21:21:31.047: INFO: Deleting all statefulset in ns statefulset-2834 Apr 25 21:21:31.067: INFO: Scaling statefulset ss to 0 Apr 25 21:21:41.135: INFO: Waiting for statefulset status.replicas updated to 0 Apr 25 21:21:41.137: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:21:41.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2834" for this suite. • [SLOW TEST:20.261 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":45,"skipped":619,"failed":0} SSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:21:41.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6766.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-6766.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6766.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-6766.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6766.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6766.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-6766.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6766.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-6766.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6766.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 25 21:21:47.280: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local from pod dns-6766/dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb: the server could not find the requested resource (get pods dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb) Apr 25 21:21:47.283: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local from pod dns-6766/dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb: the server could not find the requested resource (get pods dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb) Apr 25 21:21:47.287: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6766.svc.cluster.local from pod dns-6766/dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb: the server could not find the requested resource (get pods dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb) Apr 25 21:21:47.289: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6766.svc.cluster.local from pod dns-6766/dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb: the server could not find the requested resource (get pods dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb) Apr 25 21:21:47.297: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local from pod dns-6766/dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb: the server could not find the requested resource (get pods dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb) Apr 25 21:21:47.299: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local from 
pod dns-6766/dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb: the server could not find the requested resource (get pods dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb) Apr 25 21:21:47.302: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6766.svc.cluster.local from pod dns-6766/dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb: the server could not find the requested resource (get pods dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb) Apr 25 21:21:47.305: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6766.svc.cluster.local from pod dns-6766/dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb: the server could not find the requested resource (get pods dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb) Apr 25 21:21:47.310: INFO: Lookups using dns-6766/dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6766.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6766.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local jessie_udp@dns-test-service-2.dns-6766.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6766.svc.cluster.local] Apr 25 21:21:52.315: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local from pod dns-6766/dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb: the server could not find the requested resource (get pods dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb) Apr 25 21:21:52.319: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local from pod dns-6766/dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb: the server could not find the requested resource (get pods dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb) Apr 25 21:21:52.323: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6766.svc.cluster.local from 
pod dns-6766/dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb: the server could not find the requested resource (get pods dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb) Apr 25 21:21:52.327: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6766.svc.cluster.local from pod dns-6766/dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb: the server could not find the requested resource (get pods dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb) Apr 25 21:21:52.336: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local from pod dns-6766/dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb: the server could not find the requested resource (get pods dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb) Apr 25 21:21:52.338: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local from pod dns-6766/dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb: the server could not find the requested resource (get pods dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb) Apr 25 21:21:52.341: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6766.svc.cluster.local from pod dns-6766/dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb: the server could not find the requested resource (get pods dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb) Apr 25 21:21:52.343: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6766.svc.cluster.local from pod dns-6766/dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb: the server could not find the requested resource (get pods dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb) Apr 25 21:21:52.348: INFO: Lookups using dns-6766/dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6766.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6766.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local 
jessie_tcp@dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local jessie_udp@dns-test-service-2.dns-6766.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6766.svc.cluster.local] Apr 25 21:21:57.315: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local from pod dns-6766/dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb: the server could not find the requested resource (get pods dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb) Apr 25 21:21:57.319: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local from pod dns-6766/dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb: the server could not find the requested resource (get pods dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb) Apr 25 21:21:57.322: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6766.svc.cluster.local from pod dns-6766/dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb: the server could not find the requested resource (get pods dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb) Apr 25 21:21:57.326: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6766.svc.cluster.local from pod dns-6766/dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb: the server could not find the requested resource (get pods dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb) Apr 25 21:21:57.339: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local from pod dns-6766/dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb: the server could not find the requested resource (get pods dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb) Apr 25 21:21:57.342: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local from pod dns-6766/dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb: the server could not find the requested resource (get pods dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb) Apr 25 21:21:57.345: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6766.svc.cluster.local from pod 
dns-6766/dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb: the server could not find the requested resource (get pods dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb) Apr 25 21:21:57.349: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6766.svc.cluster.local from pod dns-6766/dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb: the server could not find the requested resource (get pods dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb) Apr 25 21:21:57.354: INFO: Lookups using dns-6766/dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6766.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6766.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local jessie_udp@dns-test-service-2.dns-6766.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6766.svc.cluster.local] Apr 25 21:22:02.314: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local from pod dns-6766/dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb: the server could not find the requested resource (get pods dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb) Apr 25 21:22:02.317: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local from pod dns-6766/dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb: the server could not find the requested resource (get pods dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb) Apr 25 21:22:02.321: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6766.svc.cluster.local from pod dns-6766/dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb: the server could not find the requested resource (get pods dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb) Apr 25 21:22:02.324: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6766.svc.cluster.local from pod 
dns-6766/dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb: the server could not find the requested resource (get pods dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb) Apr 25 21:22:02.356: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local from pod dns-6766/dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb: the server could not find the requested resource (get pods dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb) Apr 25 21:22:02.359: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local from pod dns-6766/dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb: the server could not find the requested resource (get pods dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb) Apr 25 21:22:02.362: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6766.svc.cluster.local from pod dns-6766/dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb: the server could not find the requested resource (get pods dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb) Apr 25 21:22:02.365: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6766.svc.cluster.local from pod dns-6766/dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb: the server could not find the requested resource (get pods dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb) Apr 25 21:22:02.371: INFO: Lookups using dns-6766/dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6766.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6766.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local jessie_udp@dns-test-service-2.dns-6766.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6766.svc.cluster.local] Apr 25 21:22:07.314: INFO: Unable to read 
wheezy_udp@dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local from pod dns-6766/dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb: the server could not find the requested resource (get pods dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb) Apr 25 21:22:07.318: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local from pod dns-6766/dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb: the server could not find the requested resource (get pods dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb) Apr 25 21:22:07.322: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6766.svc.cluster.local from pod dns-6766/dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb: the server could not find the requested resource (get pods dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb) Apr 25 21:22:07.325: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6766.svc.cluster.local from pod dns-6766/dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb: the server could not find the requested resource (get pods dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb) Apr 25 21:22:07.341: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local from pod dns-6766/dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb: the server could not find the requested resource (get pods dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb) Apr 25 21:22:07.344: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local from pod dns-6766/dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb: the server could not find the requested resource (get pods dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb) Apr 25 21:22:07.347: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6766.svc.cluster.local from pod dns-6766/dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb: the server could not find the requested resource (get pods dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb) Apr 25 21:22:07.350: INFO: Unable to read 
jessie_tcp@dns-test-service-2.dns-6766.svc.cluster.local from pod dns-6766/dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb: the server could not find the requested resource (get pods dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb) Apr 25 21:22:07.356: INFO: Lookups using dns-6766/dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6766.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6766.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local jessie_udp@dns-test-service-2.dns-6766.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6766.svc.cluster.local] Apr 25 21:22:12.314: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local from pod dns-6766/dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb: the server could not find the requested resource (get pods dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb) Apr 25 21:22:12.318: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local from pod dns-6766/dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb: the server could not find the requested resource (get pods dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb) Apr 25 21:22:12.321: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6766.svc.cluster.local from pod dns-6766/dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb: the server could not find the requested resource (get pods dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb) Apr 25 21:22:12.324: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6766.svc.cluster.local from pod dns-6766/dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb: the server could not find the requested resource (get pods dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb) Apr 25 21:22:12.332: INFO: Unable to read 
jessie_udp@dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local from pod dns-6766/dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb: the server could not find the requested resource (get pods dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb) Apr 25 21:22:12.335: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local from pod dns-6766/dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb: the server could not find the requested resource (get pods dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb) Apr 25 21:22:12.337: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6766.svc.cluster.local from pod dns-6766/dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb: the server could not find the requested resource (get pods dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb) Apr 25 21:22:12.340: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6766.svc.cluster.local from pod dns-6766/dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb: the server could not find the requested resource (get pods dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb) Apr 25 21:22:12.346: INFO: Lookups using dns-6766/dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6766.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6766.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6766.svc.cluster.local jessie_udp@dns-test-service-2.dns-6766.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6766.svc.cluster.local] Apr 25 21:22:17.381: INFO: DNS probes using dns-6766/dns-test-de25a6c1-328a-4802-a7a2-d7e15f296ebb succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 
21:22:17.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6766" for this suite. • [SLOW TEST:36.383 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":46,"skipped":627,"failed":0} S ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:22:17.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 25 21:22:18.022: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c5d80def-d9b8-495d-808f-ed89c783daf8" in namespace "downward-api-7717" to be "success or failure" Apr 25 21:22:18.066: INFO: Pod "downwardapi-volume-c5d80def-d9b8-495d-808f-ed89c783daf8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 44.833355ms Apr 25 21:22:20.070: INFO: Pod "downwardapi-volume-c5d80def-d9b8-495d-808f-ed89c783daf8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048539086s Apr 25 21:22:22.075: INFO: Pod "downwardapi-volume-c5d80def-d9b8-495d-808f-ed89c783daf8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05293271s STEP: Saw pod success Apr 25 21:22:22.075: INFO: Pod "downwardapi-volume-c5d80def-d9b8-495d-808f-ed89c783daf8" satisfied condition "success or failure" Apr 25 21:22:22.078: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-c5d80def-d9b8-495d-808f-ed89c783daf8 container client-container: STEP: delete the pod Apr 25 21:22:22.152: INFO: Waiting for pod downwardapi-volume-c5d80def-d9b8-495d-808f-ed89c783daf8 to disappear Apr 25 21:22:22.164: INFO: Pod downwardapi-volume-c5d80def-d9b8-495d-808f-ed89c783daf8 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:22:22.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7717" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":47,"skipped":628,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:22:22.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name secret-emptykey-test-a489d976-0ed6-4740-975b-418060e0e8c1 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:22:22.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-267" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":48,"skipped":640,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:22:22.285: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-564 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet Apr 25 21:22:22.365: INFO: Found 0 stateful pods, waiting for 3 Apr 25 21:22:32.370: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 25 21:22:32.370: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 25 21:22:32.370: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Apr 25 21:22:42.370: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 25 21:22:42.370: INFO: Waiting for pod ss2-1 to enter 
Running - Ready=true, currently Running - Ready=true Apr 25 21:22:42.370: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Apr 25 21:22:42.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-564 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 25 21:22:42.659: INFO: stderr: "I0425 21:22:42.527233 274 log.go:172] (0xc0003e8dc0) (0xc0005f7b80) Create stream\nI0425 21:22:42.527315 274 log.go:172] (0xc0003e8dc0) (0xc0005f7b80) Stream added, broadcasting: 1\nI0425 21:22:42.530375 274 log.go:172] (0xc0003e8dc0) Reply frame received for 1\nI0425 21:22:42.530413 274 log.go:172] (0xc0003e8dc0) (0xc0005f7d60) Create stream\nI0425 21:22:42.530425 274 log.go:172] (0xc0003e8dc0) (0xc0005f7d60) Stream added, broadcasting: 3\nI0425 21:22:42.531426 274 log.go:172] (0xc0003e8dc0) Reply frame received for 3\nI0425 21:22:42.531470 274 log.go:172] (0xc0003e8dc0) (0xc000b1c000) Create stream\nI0425 21:22:42.531487 274 log.go:172] (0xc0003e8dc0) (0xc000b1c000) Stream added, broadcasting: 5\nI0425 21:22:42.532555 274 log.go:172] (0xc0003e8dc0) Reply frame received for 5\nI0425 21:22:42.620425 274 log.go:172] (0xc0003e8dc0) Data frame received for 5\nI0425 21:22:42.620461 274 log.go:172] (0xc000b1c000) (5) Data frame handling\nI0425 21:22:42.620480 274 log.go:172] (0xc000b1c000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0425 21:22:42.652852 274 log.go:172] (0xc0003e8dc0) Data frame received for 3\nI0425 21:22:42.652883 274 log.go:172] (0xc0005f7d60) (3) Data frame handling\nI0425 21:22:42.652911 274 log.go:172] (0xc0005f7d60) (3) Data frame sent\nI0425 21:22:42.653099 274 log.go:172] (0xc0003e8dc0) Data frame received for 3\nI0425 21:22:42.653318 274 log.go:172] (0xc0005f7d60) (3) Data frame handling\nI0425 21:22:42.653613 274 log.go:172] (0xc0003e8dc0) Data frame received for 5\nI0425 21:22:42.653626 274 log.go:172] 
(0xc000b1c000) (5) Data frame handling\nI0425 21:22:42.655492 274 log.go:172] (0xc0003e8dc0) Data frame received for 1\nI0425 21:22:42.655513 274 log.go:172] (0xc0005f7b80) (1) Data frame handling\nI0425 21:22:42.655522 274 log.go:172] (0xc0005f7b80) (1) Data frame sent\nI0425 21:22:42.655622 274 log.go:172] (0xc0003e8dc0) (0xc0005f7b80) Stream removed, broadcasting: 1\nI0425 21:22:42.655722 274 log.go:172] (0xc0003e8dc0) Go away received\nI0425 21:22:42.655854 274 log.go:172] (0xc0003e8dc0) (0xc0005f7b80) Stream removed, broadcasting: 1\nI0425 21:22:42.655867 274 log.go:172] (0xc0003e8dc0) (0xc0005f7d60) Stream removed, broadcasting: 3\nI0425 21:22:42.655871 274 log.go:172] (0xc0003e8dc0) (0xc000b1c000) Stream removed, broadcasting: 5\n" Apr 25 21:22:42.660: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 25 21:22:42.660: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine Apr 25 21:22:52.692: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Apr 25 21:23:02.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-564 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 25 21:23:02.958: INFO: stderr: "I0425 21:23:02.850289 296 log.go:172] (0xc000b09340) (0xc00064fe00) Create stream\nI0425 21:23:02.850342 296 log.go:172] (0xc000b09340) (0xc00064fe00) Stream added, broadcasting: 1\nI0425 21:23:02.853978 296 log.go:172] (0xc000b09340) Reply frame received for 1\nI0425 21:23:02.854038 296 log.go:172] (0xc000b09340) (0xc000afe3c0) Create stream\nI0425 21:23:02.854058 296 log.go:172] (0xc000b09340) (0xc000afe3c0) Stream added, broadcasting: 3\nI0425 21:23:02.855358 296 log.go:172] 
(0xc000b09340) Reply frame received for 3\nI0425 21:23:02.855396 296 log.go:172] (0xc000b09340) (0xc000b261e0) Create stream\nI0425 21:23:02.855407 296 log.go:172] (0xc000b09340) (0xc000b261e0) Stream added, broadcasting: 5\nI0425 21:23:02.856347 296 log.go:172] (0xc000b09340) Reply frame received for 5\nI0425 21:23:02.951090 296 log.go:172] (0xc000b09340) Data frame received for 3\nI0425 21:23:02.951148 296 log.go:172] (0xc000afe3c0) (3) Data frame handling\nI0425 21:23:02.951166 296 log.go:172] (0xc000afe3c0) (3) Data frame sent\nI0425 21:23:02.951210 296 log.go:172] (0xc000b09340) Data frame received for 5\nI0425 21:23:02.951230 296 log.go:172] (0xc000b261e0) (5) Data frame handling\nI0425 21:23:02.951247 296 log.go:172] (0xc000b261e0) (5) Data frame sent\nI0425 21:23:02.951265 296 log.go:172] (0xc000b09340) Data frame received for 5\nI0425 21:23:02.951294 296 log.go:172] (0xc000b261e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0425 21:23:02.951390 296 log.go:172] (0xc000b09340) Data frame received for 3\nI0425 21:23:02.951419 296 log.go:172] (0xc000afe3c0) (3) Data frame handling\nI0425 21:23:02.953030 296 log.go:172] (0xc000b09340) Data frame received for 1\nI0425 21:23:02.953243 296 log.go:172] (0xc00064fe00) (1) Data frame handling\nI0425 21:23:02.953291 296 log.go:172] (0xc00064fe00) (1) Data frame sent\nI0425 21:23:02.953352 296 log.go:172] (0xc000b09340) (0xc00064fe00) Stream removed, broadcasting: 1\nI0425 21:23:02.953394 296 log.go:172] (0xc000b09340) Go away received\nI0425 21:23:02.953656 296 log.go:172] (0xc000b09340) (0xc00064fe00) Stream removed, broadcasting: 1\nI0425 21:23:02.953671 296 log.go:172] (0xc000b09340) (0xc000afe3c0) Stream removed, broadcasting: 3\nI0425 21:23:02.953677 296 log.go:172] (0xc000b09340) (0xc000b261e0) Stream removed, broadcasting: 5\n" Apr 25 21:23:02.958: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 25 21:23:02.958: INFO: stdout of mv -v 
/tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 25 21:23:13.001: INFO: Waiting for StatefulSet statefulset-564/ss2 to complete update Apr 25 21:23:13.001: INFO: Waiting for Pod statefulset-564/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 25 21:23:13.001: INFO: Waiting for Pod statefulset-564/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 Apr 25 21:23:23.009: INFO: Waiting for StatefulSet statefulset-564/ss2 to complete update STEP: Rolling back to a previous revision Apr 25 21:23:33.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-564 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 25 21:23:33.282: INFO: stderr: "I0425 21:23:33.149602 316 log.go:172] (0xc000920630) (0xc00022d540) Create stream\nI0425 21:23:33.149655 316 log.go:172] (0xc000920630) (0xc00022d540) Stream added, broadcasting: 1\nI0425 21:23:33.152100 316 log.go:172] (0xc000920630) Reply frame received for 1\nI0425 21:23:33.152139 316 log.go:172] (0xc000920630) (0xc00098a000) Create stream\nI0425 21:23:33.152151 316 log.go:172] (0xc000920630) (0xc00098a000) Stream added, broadcasting: 3\nI0425 21:23:33.153321 316 log.go:172] (0xc000920630) Reply frame received for 3\nI0425 21:23:33.153365 316 log.go:172] (0xc000920630) (0xc0008dc000) Create stream\nI0425 21:23:33.153378 316 log.go:172] (0xc000920630) (0xc0008dc000) Stream added, broadcasting: 5\nI0425 21:23:33.154381 316 log.go:172] (0xc000920630) Reply frame received for 5\nI0425 21:23:33.247524 316 log.go:172] (0xc000920630) Data frame received for 5\nI0425 21:23:33.247546 316 log.go:172] (0xc0008dc000) (5) Data frame handling\nI0425 21:23:33.247558 316 log.go:172] (0xc0008dc000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0425 21:23:33.273688 316 log.go:172] (0xc000920630) Data frame received for 5\nI0425 
21:23:33.273734 316 log.go:172] (0xc0008dc000) (5) Data frame handling\nI0425 21:23:33.273765 316 log.go:172] (0xc000920630) Data frame received for 3\nI0425 21:23:33.273785 316 log.go:172] (0xc00098a000) (3) Data frame handling\nI0425 21:23:33.273815 316 log.go:172] (0xc00098a000) (3) Data frame sent\nI0425 21:23:33.273844 316 log.go:172] (0xc000920630) Data frame received for 3\nI0425 21:23:33.273864 316 log.go:172] (0xc00098a000) (3) Data frame handling\nI0425 21:23:33.275750 316 log.go:172] (0xc000920630) Data frame received for 1\nI0425 21:23:33.275774 316 log.go:172] (0xc00022d540) (1) Data frame handling\nI0425 21:23:33.275792 316 log.go:172] (0xc00022d540) (1) Data frame sent\nI0425 21:23:33.275905 316 log.go:172] (0xc000920630) (0xc00022d540) Stream removed, broadcasting: 1\nI0425 21:23:33.275998 316 log.go:172] (0xc000920630) Go away received\nI0425 21:23:33.276292 316 log.go:172] (0xc000920630) (0xc00022d540) Stream removed, broadcasting: 1\nI0425 21:23:33.276318 316 log.go:172] (0xc000920630) (0xc00098a000) Stream removed, broadcasting: 3\nI0425 21:23:33.276329 316 log.go:172] (0xc000920630) (0xc0008dc000) Stream removed, broadcasting: 5\n" Apr 25 21:23:33.282: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 25 21:23:33.282: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 25 21:23:43.312: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Apr 25 21:23:53.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-564 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 25 21:23:53.574: INFO: stderr: "I0425 21:23:53.470081 338 log.go:172] (0xc0002aa0b0) (0xc00068c000) Create stream\nI0425 21:23:53.470139 338 log.go:172] (0xc0002aa0b0) (0xc00068c000) Stream added, broadcasting: 1\nI0425 21:23:53.472823 338 log.go:172] 
(0xc0002aa0b0) Reply frame received for 1\nI0425 21:23:53.472866 338 log.go:172] (0xc0002aa0b0) (0xc000768000) Create stream\nI0425 21:23:53.472878 338 log.go:172] (0xc0002aa0b0) (0xc000768000) Stream added, broadcasting: 3\nI0425 21:23:53.473980 338 log.go:172] (0xc0002aa0b0) Reply frame received for 3\nI0425 21:23:53.474014 338 log.go:172] (0xc0002aa0b0) (0xc00068c140) Create stream\nI0425 21:23:53.474035 338 log.go:172] (0xc0002aa0b0) (0xc00068c140) Stream added, broadcasting: 5\nI0425 21:23:53.474982 338 log.go:172] (0xc0002aa0b0) Reply frame received for 5\nI0425 21:23:53.566233 338 log.go:172] (0xc0002aa0b0) Data frame received for 5\nI0425 21:23:53.566286 338 log.go:172] (0xc00068c140) (5) Data frame handling\nI0425 21:23:53.566302 338 log.go:172] (0xc00068c140) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0425 21:23:53.566325 338 log.go:172] (0xc0002aa0b0) Data frame received for 3\nI0425 21:23:53.566342 338 log.go:172] (0xc000768000) (3) Data frame handling\nI0425 21:23:53.566354 338 log.go:172] (0xc000768000) (3) Data frame sent\nI0425 21:23:53.566365 338 log.go:172] (0xc0002aa0b0) Data frame received for 3\nI0425 21:23:53.566375 338 log.go:172] (0xc000768000) (3) Data frame handling\nI0425 21:23:53.566552 338 log.go:172] (0xc0002aa0b0) Data frame received for 5\nI0425 21:23:53.566588 338 log.go:172] (0xc00068c140) (5) Data frame handling\nI0425 21:23:53.568186 338 log.go:172] (0xc0002aa0b0) Data frame received for 1\nI0425 21:23:53.568227 338 log.go:172] (0xc00068c000) (1) Data frame handling\nI0425 21:23:53.568265 338 log.go:172] (0xc00068c000) (1) Data frame sent\nI0425 21:23:53.568297 338 log.go:172] (0xc0002aa0b0) (0xc00068c000) Stream removed, broadcasting: 1\nI0425 21:23:53.568551 338 log.go:172] (0xc0002aa0b0) Go away received\nI0425 21:23:53.568717 338 log.go:172] (0xc0002aa0b0) (0xc00068c000) Stream removed, broadcasting: 1\nI0425 21:23:53.568747 338 log.go:172] (0xc0002aa0b0) (0xc000768000) Stream removed, 
broadcasting: 3\nI0425 21:23:53.568762 338 log.go:172] (0xc0002aa0b0) (0xc00068c140) Stream removed, broadcasting: 5\n" Apr 25 21:23:53.574: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 25 21:23:53.574: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 25 21:24:13.595: INFO: Waiting for StatefulSet statefulset-564/ss2 to complete update Apr 25 21:24:13.595: INFO: Waiting for Pod statefulset-564/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Apr 25 21:24:23.603: INFO: Deleting all statefulset in ns statefulset-564 Apr 25 21:24:23.606: INFO: Scaling statefulset ss2 to 0 Apr 25 21:24:43.622: INFO: Waiting for statefulset status.replicas updated to 0 Apr 25 21:24:43.629: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:24:43.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-564" for this suite. 
• [SLOW TEST:141.359 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":49,"skipped":663,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:24:43.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 25 21:24:43.977: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0f931697-ea81-4b9c-8a08-8ee72f096b49" in namespace "downward-api-660" to be "success or failure" Apr 25 21:24:43.996: INFO: Pod 
"downwardapi-volume-0f931697-ea81-4b9c-8a08-8ee72f096b49": Phase="Pending", Reason="", readiness=false. Elapsed: 19.758452ms Apr 25 21:24:46.057: INFO: Pod "downwardapi-volume-0f931697-ea81-4b9c-8a08-8ee72f096b49": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080665352s Apr 25 21:24:48.062: INFO: Pod "downwardapi-volume-0f931697-ea81-4b9c-8a08-8ee72f096b49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.084950521s STEP: Saw pod success Apr 25 21:24:48.062: INFO: Pod "downwardapi-volume-0f931697-ea81-4b9c-8a08-8ee72f096b49" satisfied condition "success or failure" Apr 25 21:24:48.065: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-0f931697-ea81-4b9c-8a08-8ee72f096b49 container client-container: STEP: delete the pod Apr 25 21:24:48.098: INFO: Waiting for pod downwardapi-volume-0f931697-ea81-4b9c-8a08-8ee72f096b49 to disappear Apr 25 21:24:48.102: INFO: Pod downwardapi-volume-0f931697-ea81-4b9c-8a08-8ee72f096b49 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:24:48.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-660" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":50,"skipped":665,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:24:48.109: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:24:52.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4007" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":51,"skipped":688,"failed":0} SSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:24:52.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-2af97187-8d7c-4adc-9487-826f51d43e3e STEP: Creating a pod to test consume secrets Apr 25 21:24:52.363: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1eba2475-bf07-4e81-bf85-2ab7eb93696b" in namespace "projected-5972" to be "success or failure" Apr 25 21:24:52.367: INFO: Pod "pod-projected-secrets-1eba2475-bf07-4e81-bf85-2ab7eb93696b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.358899ms Apr 25 21:24:54.374: INFO: Pod "pod-projected-secrets-1eba2475-bf07-4e81-bf85-2ab7eb93696b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010848308s Apr 25 21:24:56.378: INFO: Pod "pod-projected-secrets-1eba2475-bf07-4e81-bf85-2ab7eb93696b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.014964349s STEP: Saw pod success Apr 25 21:24:56.378: INFO: Pod "pod-projected-secrets-1eba2475-bf07-4e81-bf85-2ab7eb93696b" satisfied condition "success or failure" Apr 25 21:24:56.381: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-1eba2475-bf07-4e81-bf85-2ab7eb93696b container projected-secret-volume-test: STEP: delete the pod Apr 25 21:24:56.430: INFO: Waiting for pod pod-projected-secrets-1eba2475-bf07-4e81-bf85-2ab7eb93696b to disappear Apr 25 21:24:56.433: INFO: Pod pod-projected-secrets-1eba2475-bf07-4e81-bf85-2ab7eb93696b no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:24:56.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5972" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":52,"skipped":691,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:24:56.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace 
pod-network-test-2559 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 25 21:24:56.506: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 25 21:25:20.671: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.196:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2559 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 25 21:25:20.671: INFO: >>> kubeConfig: /root/.kube/config I0425 21:25:20.729445 6 log.go:172] (0xc001ae6000) (0xc001a90b40) Create stream I0425 21:25:20.729478 6 log.go:172] (0xc001ae6000) (0xc001a90b40) Stream added, broadcasting: 1 I0425 21:25:20.731312 6 log.go:172] (0xc001ae6000) Reply frame received for 1 I0425 21:25:20.731366 6 log.go:172] (0xc001ae6000) (0xc001a90be0) Create stream I0425 21:25:20.731380 6 log.go:172] (0xc001ae6000) (0xc001a90be0) Stream added, broadcasting: 3 I0425 21:25:20.732120 6 log.go:172] (0xc001ae6000) Reply frame received for 3 I0425 21:25:20.732153 6 log.go:172] (0xc001ae6000) (0xc001d34280) Create stream I0425 21:25:20.732164 6 log.go:172] (0xc001ae6000) (0xc001d34280) Stream added, broadcasting: 5 I0425 21:25:20.732769 6 log.go:172] (0xc001ae6000) Reply frame received for 5 I0425 21:25:20.822740 6 log.go:172] (0xc001ae6000) Data frame received for 5 I0425 21:25:20.822781 6 log.go:172] (0xc001d34280) (5) Data frame handling I0425 21:25:20.822807 6 log.go:172] (0xc001ae6000) Data frame received for 3 I0425 21:25:20.822831 6 log.go:172] (0xc001a90be0) (3) Data frame handling I0425 21:25:20.822847 6 log.go:172] (0xc001a90be0) (3) Data frame sent I0425 21:25:20.822943 6 log.go:172] (0xc001ae6000) Data frame received for 3 I0425 21:25:20.822968 6 log.go:172] (0xc001a90be0) (3) Data frame handling I0425 21:25:20.824293 6 log.go:172] (0xc001ae6000) Data frame received for 1 I0425 21:25:20.824314 6 log.go:172] 
(0xc001a90b40) (1) Data frame handling I0425 21:25:20.824360 6 log.go:172] (0xc001a90b40) (1) Data frame sent I0425 21:25:20.824493 6 log.go:172] (0xc001ae6000) (0xc001a90b40) Stream removed, broadcasting: 1 I0425 21:25:20.824550 6 log.go:172] (0xc001ae6000) Go away received I0425 21:25:20.824943 6 log.go:172] (0xc001ae6000) (0xc001a90b40) Stream removed, broadcasting: 1 I0425 21:25:20.824974 6 log.go:172] (0xc001ae6000) (0xc001a90be0) Stream removed, broadcasting: 3 I0425 21:25:20.824991 6 log.go:172] (0xc001ae6000) (0xc001d34280) Stream removed, broadcasting: 5 Apr 25 21:25:20.825: INFO: Found all expected endpoints: [netserver-0] Apr 25 21:25:20.828: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.100:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2559 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 25 21:25:20.828: INFO: >>> kubeConfig: /root/.kube/config I0425 21:25:20.860092 6 log.go:172] (0xc001a58370) (0xc001a005a0) Create stream I0425 21:25:20.860129 6 log.go:172] (0xc001a58370) (0xc001a005a0) Stream added, broadcasting: 1 I0425 21:25:20.862583 6 log.go:172] (0xc001a58370) Reply frame received for 1 I0425 21:25:20.862621 6 log.go:172] (0xc001a58370) (0xc002240140) Create stream I0425 21:25:20.862631 6 log.go:172] (0xc001a58370) (0xc002240140) Stream added, broadcasting: 3 I0425 21:25:20.863530 6 log.go:172] (0xc001a58370) Reply frame received for 3 I0425 21:25:20.863571 6 log.go:172] (0xc001a58370) (0xc001a00780) Create stream I0425 21:25:20.863585 6 log.go:172] (0xc001a58370) (0xc001a00780) Stream added, broadcasting: 5 I0425 21:25:20.864589 6 log.go:172] (0xc001a58370) Reply frame received for 5 I0425 21:25:20.939688 6 log.go:172] (0xc001a58370) Data frame received for 3 I0425 21:25:20.939718 6 log.go:172] (0xc002240140) (3) Data frame handling I0425 21:25:20.939756 6 log.go:172] (0xc002240140) (3) Data 
frame sent I0425 21:25:20.939793 6 log.go:172] (0xc001a58370) Data frame received for 3 I0425 21:25:20.939814 6 log.go:172] (0xc002240140) (3) Data frame handling I0425 21:25:20.940076 6 log.go:172] (0xc001a58370) Data frame received for 5 I0425 21:25:20.940167 6 log.go:172] (0xc001a00780) (5) Data frame handling I0425 21:25:20.941814 6 log.go:172] (0xc001a58370) Data frame received for 1 I0425 21:25:20.941848 6 log.go:172] (0xc001a005a0) (1) Data frame handling I0425 21:25:20.941880 6 log.go:172] (0xc001a005a0) (1) Data frame sent I0425 21:25:20.941904 6 log.go:172] (0xc001a58370) (0xc001a005a0) Stream removed, broadcasting: 1 I0425 21:25:20.941984 6 log.go:172] (0xc001a58370) Go away received I0425 21:25:20.942039 6 log.go:172] (0xc001a58370) (0xc001a005a0) Stream removed, broadcasting: 1 I0425 21:25:20.942082 6 log.go:172] (0xc001a58370) (0xc002240140) Stream removed, broadcasting: 3 I0425 21:25:20.942107 6 log.go:172] (0xc001a58370) (0xc001a00780) Stream removed, broadcasting: 5 Apr 25 21:25:20.942: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:25:20.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2559" for this suite. 
• [SLOW TEST:24.485 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":53,"skipped":701,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:25:20.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:25:25.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-982" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":54,"skipped":750,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:25:25.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:25:41.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"resourcequota-199" for this suite. • [SLOW TEST:16.324 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":55,"skipped":764,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:25:41.396: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 25 21:25:41.465: INFO: Waiting up to 5m0s for pod "downwardapi-volume-51989bc6-9d90-4e3a-8bed-e363de717f1b" in namespace "projected-4128" to be "success or failure" Apr 25 21:25:41.506: INFO: Pod "downwardapi-volume-51989bc6-9d90-4e3a-8bed-e363de717f1b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 41.315572ms Apr 25 21:25:43.519: INFO: Pod "downwardapi-volume-51989bc6-9d90-4e3a-8bed-e363de717f1b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05419532s Apr 25 21:25:45.523: INFO: Pod "downwardapi-volume-51989bc6-9d90-4e3a-8bed-e363de717f1b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058201227s STEP: Saw pod success Apr 25 21:25:45.523: INFO: Pod "downwardapi-volume-51989bc6-9d90-4e3a-8bed-e363de717f1b" satisfied condition "success or failure" Apr 25 21:25:45.527: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-51989bc6-9d90-4e3a-8bed-e363de717f1b container client-container: STEP: delete the pod Apr 25 21:25:45.586: INFO: Waiting for pod downwardapi-volume-51989bc6-9d90-4e3a-8bed-e363de717f1b to disappear Apr 25 21:25:45.596: INFO: Pod downwardapi-volume-51989bc6-9d90-4e3a-8bed-e363de717f1b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:25:45.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4128" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":56,"skipped":786,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:25:45.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 25 21:25:46.190: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 25 21:25:48.202: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723446746, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723446746, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723446746, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723446746, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 25 21:25:51.244: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:25:51.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9826" for this suite. STEP: Destroying namespace "webhook-9826-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.737 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":57,"skipped":817,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:25:51.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Apr 25 21:25:56.140: INFO: Successfully updated pod "labelsupdate33072f59-dc84-4e9d-b99e-eec2c403df14" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 
Apr 25 21:25:58.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9766" for this suite. • [SLOW TEST:6.824 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":58,"skipped":846,"failed":0} SSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:25:58.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-3550 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating stateful set ss in namespace statefulset-3550 STEP: Waiting until all stateful set ss 
replicas will be running in namespace statefulset-3550 Apr 25 21:25:58.249: INFO: Found 0 stateful pods, waiting for 1 Apr 25 21:26:08.253: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Apr 25 21:26:08.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3550 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 25 21:26:08.492: INFO: stderr: "I0425 21:26:08.391659 359 log.go:172] (0xc0000f7290) (0xc0006e4000) Create stream\nI0425 21:26:08.391734 359 log.go:172] (0xc0000f7290) (0xc0006e4000) Stream added, broadcasting: 1\nI0425 21:26:08.394884 359 log.go:172] (0xc0000f7290) Reply frame received for 1\nI0425 21:26:08.394946 359 log.go:172] (0xc0000f7290) (0xc00067fb80) Create stream\nI0425 21:26:08.394973 359 log.go:172] (0xc0000f7290) (0xc00067fb80) Stream added, broadcasting: 3\nI0425 21:26:08.395973 359 log.go:172] (0xc0000f7290) Reply frame received for 3\nI0425 21:26:08.396025 359 log.go:172] (0xc0000f7290) (0xc00052a000) Create stream\nI0425 21:26:08.396045 359 log.go:172] (0xc0000f7290) (0xc00052a000) Stream added, broadcasting: 5\nI0425 21:26:08.397006 359 log.go:172] (0xc0000f7290) Reply frame received for 5\nI0425 21:26:08.451906 359 log.go:172] (0xc0000f7290) Data frame received for 5\nI0425 21:26:08.451934 359 log.go:172] (0xc00052a000) (5) Data frame handling\nI0425 21:26:08.451950 359 log.go:172] (0xc00052a000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0425 21:26:08.483892 359 log.go:172] (0xc0000f7290) Data frame received for 3\nI0425 21:26:08.483940 359 log.go:172] (0xc00067fb80) (3) Data frame handling\nI0425 21:26:08.483958 359 log.go:172] (0xc00067fb80) (3) Data frame sent\nI0425 21:26:08.483966 359 log.go:172] (0xc0000f7290) Data frame received for 3\nI0425 21:26:08.483973 359 log.go:172] (0xc00067fb80) (3) 
Data frame handling\nI0425 21:26:08.484021 359 log.go:172] (0xc0000f7290) Data frame received for 5\nI0425 21:26:08.484040 359 log.go:172] (0xc00052a000) (5) Data frame handling\nI0425 21:26:08.486257 359 log.go:172] (0xc0000f7290) Data frame received for 1\nI0425 21:26:08.486335 359 log.go:172] (0xc0006e4000) (1) Data frame handling\nI0425 21:26:08.486373 359 log.go:172] (0xc0006e4000) (1) Data frame sent\nI0425 21:26:08.486393 359 log.go:172] (0xc0000f7290) (0xc0006e4000) Stream removed, broadcasting: 1\nI0425 21:26:08.486417 359 log.go:172] (0xc0000f7290) Go away received\nI0425 21:26:08.486929 359 log.go:172] (0xc0000f7290) (0xc0006e4000) Stream removed, broadcasting: 1\nI0425 21:26:08.486948 359 log.go:172] (0xc0000f7290) (0xc00067fb80) Stream removed, broadcasting: 3\nI0425 21:26:08.486958 359 log.go:172] (0xc0000f7290) (0xc00052a000) Stream removed, broadcasting: 5\n" Apr 25 21:26:08.492: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 25 21:26:08.492: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 25 21:26:08.495: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 25 21:26:18.500: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 25 21:26:18.500: INFO: Waiting for statefulset status.replicas updated to 0 Apr 25 21:26:18.518: INFO: POD NODE PHASE GRACE CONDITIONS Apr 25 21:26:18.518: INFO: ss-0 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:25:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:08 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:25:58 
+0000 UTC }] Apr 25 21:26:18.518: INFO: Apr 25 21:26:18.518: INFO: StatefulSet ss has not reached scale 3, at 1 Apr 25 21:26:19.523: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.991754762s Apr 25 21:26:20.604: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.986801317s Apr 25 21:26:21.634: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.905802762s Apr 25 21:26:22.669: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.87629763s Apr 25 21:26:23.674: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.84045101s Apr 25 21:26:24.679: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.835582888s Apr 25 21:26:25.700: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.830697773s Apr 25 21:26:26.705: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.809938519s Apr 25 21:26:27.711: INFO: Verifying statefulset ss doesn't scale past 3 for another 804.569806ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3550 Apr 25 21:26:28.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3550 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 25 21:26:28.948: INFO: stderr: "I0425 21:26:28.844997 379 log.go:172] (0xc0009e4000) (0xc00094a000) Create stream\nI0425 21:26:28.845058 379 log.go:172] (0xc0009e4000) (0xc00094a000) Stream added, broadcasting: 1\nI0425 21:26:28.847139 379 log.go:172] (0xc0009e4000) Reply frame received for 1\nI0425 21:26:28.847179 379 log.go:172] (0xc0009e4000) (0xc00094a0a0) Create stream\nI0425 21:26:28.847188 379 log.go:172] (0xc0009e4000) (0xc00094a0a0) Stream added, broadcasting: 3\nI0425 21:26:28.848115 379 log.go:172] (0xc0009e4000) Reply frame received for 3\nI0425 21:26:28.848162 379 log.go:172] (0xc0009e4000) (0xc0004755e0) Create stream\nI0425 21:26:28.848176 379 
log.go:172] (0xc0009e4000) (0xc0004755e0) Stream added, broadcasting: 5\nI0425 21:26:28.849396 379 log.go:172] (0xc0009e4000) Reply frame received for 5\nI0425 21:26:28.941587 379 log.go:172] (0xc0009e4000) Data frame received for 3\nI0425 21:26:28.941621 379 log.go:172] (0xc00094a0a0) (3) Data frame handling\nI0425 21:26:28.941636 379 log.go:172] (0xc00094a0a0) (3) Data frame sent\nI0425 21:26:28.941648 379 log.go:172] (0xc0009e4000) Data frame received for 3\nI0425 21:26:28.941658 379 log.go:172] (0xc00094a0a0) (3) Data frame handling\nI0425 21:26:28.941671 379 log.go:172] (0xc0009e4000) Data frame received for 5\nI0425 21:26:28.941680 379 log.go:172] (0xc0004755e0) (5) Data frame handling\nI0425 21:26:28.941691 379 log.go:172] (0xc0004755e0) (5) Data frame sent\nI0425 21:26:28.941700 379 log.go:172] (0xc0009e4000) Data frame received for 5\nI0425 21:26:28.941709 379 log.go:172] (0xc0004755e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0425 21:26:28.943088 379 log.go:172] (0xc0009e4000) Data frame received for 1\nI0425 21:26:28.943115 379 log.go:172] (0xc00094a000) (1) Data frame handling\nI0425 21:26:28.943148 379 log.go:172] (0xc00094a000) (1) Data frame sent\nI0425 21:26:28.943177 379 log.go:172] (0xc0009e4000) (0xc00094a000) Stream removed, broadcasting: 1\nI0425 21:26:28.943198 379 log.go:172] (0xc0009e4000) Go away received\nI0425 21:26:28.943437 379 log.go:172] (0xc0009e4000) (0xc00094a000) Stream removed, broadcasting: 1\nI0425 21:26:28.943464 379 log.go:172] (0xc0009e4000) (0xc00094a0a0) Stream removed, broadcasting: 3\nI0425 21:26:28.943479 379 log.go:172] (0xc0009e4000) (0xc0004755e0) Stream removed, broadcasting: 5\n" Apr 25 21:26:28.948: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 25 21:26:28.948: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 25 21:26:28.948: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3550 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 25 21:26:29.165: INFO: stderr: "I0425 21:26:29.098972 399 log.go:172] (0xc0000f4fd0) (0xc000976000) Create stream\nI0425 21:26:29.099039 399 log.go:172] (0xc0000f4fd0) (0xc000976000) Stream added, broadcasting: 1\nI0425 21:26:29.101429 399 log.go:172] (0xc0000f4fd0) Reply frame received for 1\nI0425 21:26:29.101480 399 log.go:172] (0xc0000f4fd0) (0xc0005fdae0) Create stream\nI0425 21:26:29.101498 399 log.go:172] (0xc0000f4fd0) (0xc0005fdae0) Stream added, broadcasting: 3\nI0425 21:26:29.102278 399 log.go:172] (0xc0000f4fd0) Reply frame received for 3\nI0425 21:26:29.102319 399 log.go:172] (0xc0000f4fd0) (0xc000512000) Create stream\nI0425 21:26:29.102336 399 log.go:172] (0xc0000f4fd0) (0xc000512000) Stream added, broadcasting: 5\nI0425 21:26:29.103125 399 log.go:172] (0xc0000f4fd0) Reply frame received for 5\nI0425 21:26:29.156948 399 log.go:172] (0xc0000f4fd0) Data frame received for 3\nI0425 21:26:29.156983 399 log.go:172] (0xc0005fdae0) (3) Data frame handling\nI0425 21:26:29.156998 399 log.go:172] (0xc0005fdae0) (3) Data frame sent\nI0425 21:26:29.157019 399 log.go:172] (0xc0000f4fd0) Data frame received for 3\nI0425 21:26:29.157031 399 log.go:172] (0xc0005fdae0) (3) Data frame handling\nI0425 21:26:29.157061 399 log.go:172] (0xc0000f4fd0) Data frame received for 5\nI0425 21:26:29.157094 399 log.go:172] (0xc000512000) (5) Data frame handling\nI0425 21:26:29.157277 399 log.go:172] (0xc000512000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0425 21:26:29.157308 399 log.go:172] (0xc0000f4fd0) Data frame received for 5\nI0425 21:26:29.157326 399 log.go:172] (0xc000512000) (5) Data frame handling\nI0425 21:26:29.159322 399 log.go:172] (0xc0000f4fd0) Data frame received for 1\nI0425 21:26:29.159343 
399 log.go:172] (0xc000976000) (1) Data frame handling\nI0425 21:26:29.159373 399 log.go:172] (0xc000976000) (1) Data frame sent\nI0425 21:26:29.159396 399 log.go:172] (0xc0000f4fd0) (0xc000976000) Stream removed, broadcasting: 1\nI0425 21:26:29.159467 399 log.go:172] (0xc0000f4fd0) Go away received\nI0425 21:26:29.159809 399 log.go:172] (0xc0000f4fd0) (0xc000976000) Stream removed, broadcasting: 1\nI0425 21:26:29.159835 399 log.go:172] (0xc0000f4fd0) (0xc0005fdae0) Stream removed, broadcasting: 3\nI0425 21:26:29.159849 399 log.go:172] (0xc0000f4fd0) (0xc000512000) Stream removed, broadcasting: 5\n" Apr 25 21:26:29.166: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 25 21:26:29.166: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 25 21:26:29.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3550 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 25 21:26:29.392: INFO: stderr: "I0425 21:26:29.297907 420 log.go:172] (0xc0008f20b0) (0xc0009661e0) Create stream\nI0425 21:26:29.297972 420 log.go:172] (0xc0008f20b0) (0xc0009661e0) Stream added, broadcasting: 1\nI0425 21:26:29.310182 420 log.go:172] (0xc0008f20b0) Reply frame received for 1\nI0425 21:26:29.310237 420 log.go:172] (0xc0008f20b0) (0xc0004059a0) Create stream\nI0425 21:26:29.310248 420 log.go:172] (0xc0008f20b0) (0xc0004059a0) Stream added, broadcasting: 3\nI0425 21:26:29.311825 420 log.go:172] (0xc0008f20b0) Reply frame received for 3\nI0425 21:26:29.311857 420 log.go:172] (0xc0008f20b0) (0xc000405a40) Create stream\nI0425 21:26:29.311870 420 log.go:172] (0xc0008f20b0) (0xc000405a40) Stream added, broadcasting: 5\nI0425 21:26:29.313035 420 log.go:172] (0xc0008f20b0) Reply frame received for 5\nI0425 21:26:29.384400 420 log.go:172] (0xc0008f20b0) Data frame received for 3\nI0425 
21:26:29.384448 420 log.go:172] (0xc0004059a0) (3) Data frame handling\nI0425 21:26:29.384471 420 log.go:172] (0xc0004059a0) (3) Data frame sent\nI0425 21:26:29.384510 420 log.go:172] (0xc0008f20b0) Data frame received for 5\nI0425 21:26:29.384530 420 log.go:172] (0xc000405a40) (5) Data frame handling\nI0425 21:26:29.384546 420 log.go:172] (0xc000405a40) (5) Data frame sent\nI0425 21:26:29.384559 420 log.go:172] (0xc0008f20b0) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0425 21:26:29.384570 420 log.go:172] (0xc000405a40) (5) Data frame handling\nI0425 21:26:29.384615 420 log.go:172] (0xc0008f20b0) Data frame received for 3\nI0425 21:26:29.384643 420 log.go:172] (0xc0004059a0) (3) Data frame handling\nI0425 21:26:29.386507 420 log.go:172] (0xc0008f20b0) Data frame received for 1\nI0425 21:26:29.386559 420 log.go:172] (0xc0009661e0) (1) Data frame handling\nI0425 21:26:29.386593 420 log.go:172] (0xc0009661e0) (1) Data frame sent\nI0425 21:26:29.386632 420 log.go:172] (0xc0008f20b0) (0xc0009661e0) Stream removed, broadcasting: 1\nI0425 21:26:29.386900 420 log.go:172] (0xc0008f20b0) Go away received\nI0425 21:26:29.387144 420 log.go:172] (0xc0008f20b0) (0xc0009661e0) Stream removed, broadcasting: 1\nI0425 21:26:29.387172 420 log.go:172] (0xc0008f20b0) (0xc0004059a0) Stream removed, broadcasting: 3\nI0425 21:26:29.387185 420 log.go:172] (0xc0008f20b0) (0xc000405a40) Stream removed, broadcasting: 5\n" Apr 25 21:26:29.392: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 25 21:26:29.392: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 25 21:26:29.396: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Apr 25 21:26:39.401: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - 
Ready=true Apr 25 21:26:39.401: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 25 21:26:39.401: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Apr 25 21:26:39.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3550 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 25 21:26:39.628: INFO: stderr: "I0425 21:26:39.528822 438 log.go:172] (0xc000ac80b0) (0xc0004554a0) Create stream\nI0425 21:26:39.528877 438 log.go:172] (0xc000ac80b0) (0xc0004554a0) Stream added, broadcasting: 1\nI0425 21:26:39.531477 438 log.go:172] (0xc000ac80b0) Reply frame received for 1\nI0425 21:26:39.531520 438 log.go:172] (0xc000ac80b0) (0xc000966000) Create stream\nI0425 21:26:39.531535 438 log.go:172] (0xc000ac80b0) (0xc000966000) Stream added, broadcasting: 3\nI0425 21:26:39.532739 438 log.go:172] (0xc000ac80b0) Reply frame received for 3\nI0425 21:26:39.532775 438 log.go:172] (0xc000ac80b0) (0xc0009660a0) Create stream\nI0425 21:26:39.532788 438 log.go:172] (0xc000ac80b0) (0xc0009660a0) Stream added, broadcasting: 5\nI0425 21:26:39.534061 438 log.go:172] (0xc000ac80b0) Reply frame received for 5\nI0425 21:26:39.623341 438 log.go:172] (0xc000ac80b0) Data frame received for 3\nI0425 21:26:39.623373 438 log.go:172] (0xc000966000) (3) Data frame handling\nI0425 21:26:39.623381 438 log.go:172] (0xc000966000) (3) Data frame sent\nI0425 21:26:39.623387 438 log.go:172] (0xc000ac80b0) Data frame received for 3\nI0425 21:26:39.623400 438 log.go:172] (0xc000966000) (3) Data frame handling\nI0425 21:26:39.623421 438 log.go:172] (0xc000ac80b0) Data frame received for 5\nI0425 21:26:39.623440 438 log.go:172] (0xc0009660a0) (5) Data frame handling\nI0425 21:26:39.623448 438 log.go:172] (0xc0009660a0) (5) Data frame sent\nI0425 21:26:39.623454 438 log.go:172] (0xc000ac80b0) Data 
frame received for 5\nI0425 21:26:39.623458 438 log.go:172] (0xc0009660a0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0425 21:26:39.624508 438 log.go:172] (0xc000ac80b0) Data frame received for 1\nI0425 21:26:39.624527 438 log.go:172] (0xc0004554a0) (1) Data frame handling\nI0425 21:26:39.624539 438 log.go:172] (0xc0004554a0) (1) Data frame sent\nI0425 21:26:39.624549 438 log.go:172] (0xc000ac80b0) (0xc0004554a0) Stream removed, broadcasting: 1\nI0425 21:26:39.624565 438 log.go:172] (0xc000ac80b0) Go away received\nI0425 21:26:39.624886 438 log.go:172] (0xc000ac80b0) (0xc0004554a0) Stream removed, broadcasting: 1\nI0425 21:26:39.624898 438 log.go:172] (0xc000ac80b0) (0xc000966000) Stream removed, broadcasting: 3\nI0425 21:26:39.624905 438 log.go:172] (0xc000ac80b0) (0xc0009660a0) Stream removed, broadcasting: 5\n" Apr 25 21:26:39.628: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 25 21:26:39.628: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 25 21:26:39.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3550 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 25 21:26:39.856: INFO: stderr: "I0425 21:26:39.759783 458 log.go:172] (0xc0000f4a50) (0xc000a80000) Create stream\nI0425 21:26:39.759843 458 log.go:172] (0xc0000f4a50) (0xc000a80000) Stream added, broadcasting: 1\nI0425 21:26:39.762716 458 log.go:172] (0xc0000f4a50) Reply frame received for 1\nI0425 21:26:39.762759 458 log.go:172] (0xc0000f4a50) (0xc00070bae0) Create stream\nI0425 21:26:39.762772 458 log.go:172] (0xc0000f4a50) (0xc00070bae0) Stream added, broadcasting: 3\nI0425 21:26:39.763854 458 log.go:172] (0xc0000f4a50) Reply frame received for 3\nI0425 21:26:39.763907 458 log.go:172] (0xc0000f4a50) (0xc000a800a0) Create stream\nI0425 21:26:39.763922 458 
log.go:172] (0xc0000f4a50) (0xc000a800a0) Stream added, broadcasting: 5\nI0425 21:26:39.764906 458 log.go:172] (0xc0000f4a50) Reply frame received for 5\nI0425 21:26:39.822601 458 log.go:172] (0xc0000f4a50) Data frame received for 5\nI0425 21:26:39.822630 458 log.go:172] (0xc000a800a0) (5) Data frame handling\nI0425 21:26:39.822648 458 log.go:172] (0xc000a800a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0425 21:26:39.848967 458 log.go:172] (0xc0000f4a50) Data frame received for 3\nI0425 21:26:39.848989 458 log.go:172] (0xc00070bae0) (3) Data frame handling\nI0425 21:26:39.849000 458 log.go:172] (0xc00070bae0) (3) Data frame sent\nI0425 21:26:39.849607 458 log.go:172] (0xc0000f4a50) Data frame received for 5\nI0425 21:26:39.849638 458 log.go:172] (0xc000a800a0) (5) Data frame handling\nI0425 21:26:39.849681 458 log.go:172] (0xc0000f4a50) Data frame received for 3\nI0425 21:26:39.849699 458 log.go:172] (0xc00070bae0) (3) Data frame handling\nI0425 21:26:39.851650 458 log.go:172] (0xc0000f4a50) Data frame received for 1\nI0425 21:26:39.851690 458 log.go:172] (0xc000a80000) (1) Data frame handling\nI0425 21:26:39.851710 458 log.go:172] (0xc000a80000) (1) Data frame sent\nI0425 21:26:39.851739 458 log.go:172] (0xc0000f4a50) (0xc000a80000) Stream removed, broadcasting: 1\nI0425 21:26:39.851767 458 log.go:172] (0xc0000f4a50) Go away received\nI0425 21:26:39.852208 458 log.go:172] (0xc0000f4a50) (0xc000a80000) Stream removed, broadcasting: 1\nI0425 21:26:39.852232 458 log.go:172] (0xc0000f4a50) (0xc00070bae0) Stream removed, broadcasting: 3\nI0425 21:26:39.852252 458 log.go:172] (0xc0000f4a50) (0xc000a800a0) Stream removed, broadcasting: 5\n" Apr 25 21:26:39.856: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 25 21:26:39.856: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 25 21:26:39.856: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3550 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 25 21:26:40.078: INFO: stderr: "I0425 21:26:39.978755 479 log.go:172] (0xc000604dc0) (0xc0007ea000) Create stream\nI0425 21:26:39.978822 479 log.go:172] (0xc000604dc0) (0xc0007ea000) Stream added, broadcasting: 1\nI0425 21:26:39.981651 479 log.go:172] (0xc000604dc0) Reply frame received for 1\nI0425 21:26:39.981708 479 log.go:172] (0xc000604dc0) (0xc000547a40) Create stream\nI0425 21:26:39.981719 479 log.go:172] (0xc000604dc0) (0xc000547a40) Stream added, broadcasting: 3\nI0425 21:26:39.982693 479 log.go:172] (0xc000604dc0) Reply frame received for 3\nI0425 21:26:39.982742 479 log.go:172] (0xc000604dc0) (0xc0007ea0a0) Create stream\nI0425 21:26:39.982755 479 log.go:172] (0xc000604dc0) (0xc0007ea0a0) Stream added, broadcasting: 5\nI0425 21:26:39.983771 479 log.go:172] (0xc000604dc0) Reply frame received for 5\nI0425 21:26:40.036939 479 log.go:172] (0xc000604dc0) Data frame received for 5\nI0425 21:26:40.036980 479 log.go:172] (0xc0007ea0a0) (5) Data frame handling\nI0425 21:26:40.037008 479 log.go:172] (0xc0007ea0a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0425 21:26:40.071191 479 log.go:172] (0xc000604dc0) Data frame received for 5\nI0425 21:26:40.071234 479 log.go:172] (0xc0007ea0a0) (5) Data frame handling\nI0425 21:26:40.071258 479 log.go:172] (0xc000604dc0) Data frame received for 3\nI0425 21:26:40.071270 479 log.go:172] (0xc000547a40) (3) Data frame handling\nI0425 21:26:40.071284 479 log.go:172] (0xc000547a40) (3) Data frame sent\nI0425 21:26:40.071489 479 log.go:172] (0xc000604dc0) Data frame received for 3\nI0425 21:26:40.071515 479 log.go:172] (0xc000547a40) (3) Data frame handling\nI0425 21:26:40.072897 479 log.go:172] (0xc000604dc0) Data frame received for 1\nI0425 21:26:40.072912 479 log.go:172] (0xc0007ea000) (1) Data frame handling\nI0425 
21:26:40.072921 479 log.go:172] (0xc0007ea000) (1) Data frame sent\nI0425 21:26:40.072933 479 log.go:172] (0xc000604dc0) (0xc0007ea000) Stream removed, broadcasting: 1\nI0425 21:26:40.072944 479 log.go:172] (0xc000604dc0) Go away received\nI0425 21:26:40.073553 479 log.go:172] (0xc000604dc0) (0xc0007ea000) Stream removed, broadcasting: 1\nI0425 21:26:40.073588 479 log.go:172] (0xc000604dc0) (0xc000547a40) Stream removed, broadcasting: 3\nI0425 21:26:40.073602 479 log.go:172] (0xc000604dc0) (0xc0007ea0a0) Stream removed, broadcasting: 5\n" Apr 25 21:26:40.078: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 25 21:26:40.078: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 25 21:26:40.078: INFO: Waiting for statefulset status.replicas updated to 0 Apr 25 21:26:40.082: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Apr 25 21:26:50.090: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 25 21:26:50.091: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 25 21:26:50.091: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 25 21:26:50.106: INFO: POD NODE PHASE GRACE CONDITIONS Apr 25 21:26:50.106: INFO: ss-0 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:25:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:25:58 +0000 UTC }] Apr 25 21:26:50.106: INFO: ss-1 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 
21:26:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:18 +0000 UTC }] Apr 25 21:26:50.106: INFO: ss-2 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:18 +0000 UTC }] Apr 25 21:26:50.106: INFO: Apr 25 21:26:50.106: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 25 21:26:51.132: INFO: POD NODE PHASE GRACE CONDITIONS Apr 25 21:26:51.132: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:25:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:25:58 +0000 UTC }] Apr 25 21:26:51.132: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:18 +0000 UTC }] Apr 25 21:26:51.132: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:18 +0000 UTC }] Apr 25 21:26:51.132: INFO: Apr 25 21:26:51.132: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 25 21:26:52.138: INFO: POD NODE PHASE GRACE CONDITIONS Apr 25 21:26:52.138: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:25:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:25:58 +0000 UTC }] Apr 25 21:26:52.138: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:18 +0000 UTC }] Apr 25 21:26:52.138: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: 
[webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:18 +0000 UTC }] Apr 25 21:26:52.138: INFO: Apr 25 21:26:52.138: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 25 21:26:53.142: INFO: POD NODE PHASE GRACE CONDITIONS Apr 25 21:26:53.142: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:25:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:25:58 +0000 UTC }] Apr 25 21:26:53.142: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:18 +0000 UTC }] Apr 25 21:26:53.142: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:18 +0000 UTC }] Apr 25 21:26:53.142: INFO: Apr 25 21:26:53.142: INFO: StatefulSet ss has not reached 
scale 0, at 3 Apr 25 21:26:54.147: INFO: POD NODE PHASE GRACE CONDITIONS Apr 25 21:26:54.147: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:25:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:25:58 +0000 UTC }] Apr 25 21:26:54.147: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:18 +0000 UTC }] Apr 25 21:26:54.147: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:18 +0000 UTC }] Apr 25 21:26:54.147: INFO: Apr 25 21:26:54.147: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 25 21:26:55.159: INFO: POD NODE PHASE GRACE CONDITIONS Apr 25 21:26:55.159: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:25:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready 
status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:25:58 +0000 UTC }] Apr 25 21:26:55.159: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:18 +0000 UTC }] Apr 25 21:26:55.159: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:18 +0000 UTC }] Apr 25 21:26:55.159: INFO: Apr 25 21:26:55.159: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 25 21:26:56.165: INFO: POD NODE PHASE GRACE CONDITIONS Apr 25 21:26:56.165: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:25:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:25:58 +0000 UTC }] Apr 25 21:26:56.165: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:18 +0000 UTC }] Apr 25 21:26:56.165: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:18 +0000 UTC }] Apr 25 21:26:56.165: INFO: Apr 25 21:26:56.165: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 25 21:26:57.184: INFO: POD NODE PHASE GRACE CONDITIONS Apr 25 21:26:57.184: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:25:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:25:58 +0000 UTC }] Apr 25 21:26:57.184: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready 
status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:18 +0000 UTC }] Apr 25 21:26:57.184: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:18 +0000 UTC }] Apr 25 21:26:57.184: INFO: Apr 25 21:26:57.184: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 25 21:26:58.189: INFO: POD NODE PHASE GRACE CONDITIONS Apr 25 21:26:58.189: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:25:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:25:58 +0000 UTC }] Apr 25 21:26:58.189: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:18 +0000 UTC }] Apr 25 21:26:58.189: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC 
ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:18 +0000 UTC }] Apr 25 21:26:58.189: INFO: Apr 25 21:26:58.189: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 25 21:26:59.197: INFO: POD NODE PHASE GRACE CONDITIONS Apr 25 21:26:59.197: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:25:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:25:58 +0000 UTC }] Apr 25 21:26:59.197: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:18 +0000 UTC }] Apr 25 21:26:59.197: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:40 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-25 21:26:18 +0000 UTC }] Apr 25 21:26:59.197: INFO: Apr 25 
21:26:59.197: INFO: StatefulSet ss has not reached scale 0, at 3
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-3550
Apr 25 21:27:00.201: INFO: Scaling statefulset ss to 0
Apr 25 21:27:00.209: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Apr 25 21:27:00.211: INFO: Deleting all statefulset in ns statefulset-3550
Apr 25 21:27:00.214: INFO: Scaling statefulset ss to 0
Apr 25 21:27:00.221: INFO: Waiting for statefulset status.replicas updated to 0
Apr 25 21:27:00.223: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 21:27:00.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3550" for this suite.
• [SLOW TEST:62.081 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":59,"skipped":853,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 21:27:00.247: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap configmap-5580/configmap-test-07c034a5-6068-4d16-b231-fd5dd0f17b45
STEP: Creating a pod to test consume configMaps
Apr 25 21:27:00.342: INFO: Waiting up to 5m0s for pod "pod-configmaps-064c8f77-0dd5-43bc-bff1-2dbf314ef8aa" in namespace "configmap-5580" to be "success or failure"
Apr 25 21:27:00.346: INFO: Pod "pod-configmaps-064c8f77-0dd5-43bc-bff1-2dbf314ef8aa": Phase="Pending", Reason="", readiness=false.
Elapsed: 3.928458ms
Apr 25 21:27:02.349: INFO: Pod "pod-configmaps-064c8f77-0dd5-43bc-bff1-2dbf314ef8aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007381008s
Apr 25 21:27:04.353: INFO: Pod "pod-configmaps-064c8f77-0dd5-43bc-bff1-2dbf314ef8aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011820667s
STEP: Saw pod success
Apr 25 21:27:04.354: INFO: Pod "pod-configmaps-064c8f77-0dd5-43bc-bff1-2dbf314ef8aa" satisfied condition "success or failure"
Apr 25 21:27:04.356: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-064c8f77-0dd5-43bc-bff1-2dbf314ef8aa container env-test:
STEP: delete the pod
Apr 25 21:27:04.389: INFO: Waiting for pod pod-configmaps-064c8f77-0dd5-43bc-bff1-2dbf314ef8aa to disappear
Apr 25 21:27:04.394: INFO: Pod pod-configmaps-064c8f77-0dd5-43bc-bff1-2dbf314ef8aa no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 21:27:04.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5580" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":60,"skipped":866,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 21:27:04.401: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 25 21:27:04.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9542'
Apr 25 21:27:04.792: INFO: stderr: ""
Apr 25 21:27:04.792: INFO: stdout: "replicationcontroller/agnhost-master created\n"
Apr 25 21:27:04.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9542'
Apr 25 21:27:05.084: INFO: stderr: ""
Apr 25 21:27:05.084: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Apr 25 21:27:06.095: INFO: Selector matched 1 pods for map[app:agnhost] Apr 25 21:27:06.095: INFO: Found 0 / 1 Apr 25 21:27:07.089: INFO: Selector matched 1 pods for map[app:agnhost] Apr 25 21:27:07.089: INFO: Found 0 / 1 Apr 25 21:27:08.089: INFO: Selector matched 1 pods for map[app:agnhost] Apr 25 21:27:08.089: INFO: Found 1 / 1 Apr 25 21:27:08.089: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 25 21:27:08.092: INFO: Selector matched 1 pods for map[app:agnhost] Apr 25 21:27:08.092: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 25 21:27:08.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-jqsps --namespace=kubectl-9542' Apr 25 21:27:08.202: INFO: stderr: "" Apr 25 21:27:08.203: INFO: stdout: "Name: agnhost-master-jqsps\nNamespace: kubectl-9542\nPriority: 0\nNode: jerma-worker2/172.17.0.8\nStart Time: Sat, 25 Apr 2020 21:27:04 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.107\nIPs:\n IP: 10.244.2.107\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://fa54c52e909f4aab6669dfffb25d264eeb8734dda284a2646436aa9f0f6f4782\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sat, 25 Apr 2020 21:27:06 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-9x2zp (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-9x2zp:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-9x2zp\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n 
node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-9542/agnhost-master-jqsps to jerma-worker2\n Normal Pulled 2s kubelet, jerma-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 2s kubelet, jerma-worker2 Created container agnhost-master\n Normal Started 2s kubelet, jerma-worker2 Started container agnhost-master\n" Apr 25 21:27:08.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-9542' Apr 25 21:27:08.341: INFO: stderr: "" Apr 25 21:27:08.341: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-9542\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-master-jqsps\n" Apr 25 21:27:08.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-9542' Apr 25 21:27:08.442: INFO: stderr: "" Apr 25 21:27:08.442: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-9542\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.111.9.211\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.107:6379\nSession Affinity: None\nEvents: \n" Apr 25 21:27:08.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane' Apr 25 21:27:08.558: INFO: 
stderr: "" Apr 25 21:27:08.558: INFO: stdout: "Name: jerma-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:25:55 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: jerma-control-plane\n AcquireTime: \n RenewTime: Sat, 25 Apr 2020 21:27:04 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Sat, 25 Apr 2020 21:22:50 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sat, 25 Apr 2020 21:22:50 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sat, 25 Apr 2020 21:22:50 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sat, 25 Apr 2020 21:22:50 +0000 Sun, 15 Mar 2020 18:26:27 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.9\n Hostname: jerma-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3bcfb16fe77247d3af07bed975350d5c\n System UUID: 947a2db5-5527-4203-8af5-13d97ffe8a80\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: 
amd64\n Container Runtime Version: containerd://1.3.2-31-gaa877d78\n Kubelet Version: v1.17.2\n Kube-Proxy Version: v1.17.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-rll5s 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 41d\n kube-system coredns-6955765f44-svxk5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 41d\n kube-system etcd-jerma-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 41d\n kube-system kindnet-bjddj 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 41d\n kube-system kube-apiserver-jerma-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 41d\n kube-system kube-controller-manager-jerma-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 41d\n kube-system kube-proxy-mm9zd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 41d\n kube-system kube-scheduler-jerma-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 41d\n local-path-storage local-path-provisioner-85445b74d4-7mg5w 0 (0%) 0 (0%) 0 (0%) 0 (0%) 41d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Apr 25 21:27:08.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-9542' Apr 25 21:27:08.673: INFO: stderr: "" Apr 25 21:27:08.673: INFO: stdout: "Name: kubectl-9542\nLabels: e2e-framework=kubectl\n e2e-run=c2eede41-49d8-4d0e-b302-fbbda5718c8c\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:27:08.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9542" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":61,"skipped":907,"failed":0}
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 21:27:08.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Apr 25 21:27:08.768: INFO: Waiting up to 5m0s for pod "pod-8f9731fe-abdf-4084-8867-454e366cfffd" in namespace "emptydir-5482" to be "success or failure"
Apr 25 21:27:08.771: INFO: Pod "pod-8f9731fe-abdf-4084-8867-454e366cfffd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.091274ms
Apr 25 21:27:10.776: INFO: Pod "pod-8f9731fe-abdf-4084-8867-454e366cfffd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007330426s
Apr 25 21:27:12.779: INFO: Pod "pod-8f9731fe-abdf-4084-8867-454e366cfffd": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.010942619s
STEP: Saw pod success
Apr 25 21:27:12.779: INFO: Pod "pod-8f9731fe-abdf-4084-8867-454e366cfffd" satisfied condition "success or failure"
Apr 25 21:27:12.782: INFO: Trying to get logs from node jerma-worker2 pod pod-8f9731fe-abdf-4084-8867-454e366cfffd container test-container:
STEP: delete the pod
Apr 25 21:27:12.823: INFO: Waiting for pod pod-8f9731fe-abdf-4084-8867-454e366cfffd to disappear
Apr 25 21:27:12.837: INFO: Pod pod-8f9731fe-abdf-4084-8867-454e366cfffd no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 21:27:12.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5482" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":62,"skipped":914,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 21:27:12.846: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-46c7bd12-7740-44c7-83e5-1ba2cd27dfc5
STEP: Creating a pod to
test consume secrets
Apr 25 21:27:12.921: INFO: Waiting up to 5m0s for pod "pod-secrets-591b1a57-c659-4836-9ad0-c78f4bdeb19d" in namespace "secrets-5908" to be "success or failure"
Apr 25 21:27:12.926: INFO: Pod "pod-secrets-591b1a57-c659-4836-9ad0-c78f4bdeb19d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.787167ms
Apr 25 21:27:14.931: INFO: Pod "pod-secrets-591b1a57-c659-4836-9ad0-c78f4bdeb19d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009004258s
Apr 25 21:27:16.935: INFO: Pod "pod-secrets-591b1a57-c659-4836-9ad0-c78f4bdeb19d": Phase="Running", Reason="", readiness=true. Elapsed: 4.013389681s
Apr 25 21:27:18.939: INFO: Pod "pod-secrets-591b1a57-c659-4836-9ad0-c78f4bdeb19d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017229658s
STEP: Saw pod success
Apr 25 21:27:18.939: INFO: Pod "pod-secrets-591b1a57-c659-4836-9ad0-c78f4bdeb19d" satisfied condition "success or failure"
Apr 25 21:27:18.942: INFO: Trying to get logs from node jerma-worker pod pod-secrets-591b1a57-c659-4836-9ad0-c78f4bdeb19d container secret-volume-test:
STEP: delete the pod
Apr 25 21:27:18.986: INFO: Waiting for pod pod-secrets-591b1a57-c659-4836-9ad0-c78f4bdeb19d to disappear
Apr 25 21:27:19.011: INFO: Pod pod-secrets-591b1a57-c659-4836-9ad0-c78f4bdeb19d no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 21:27:19.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5908" for this suite.
• [SLOW TEST:6.171 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":63,"skipped":958,"failed":0}
S
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 21:27:19.018: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test substitution in container's command
Apr 25 21:27:19.085: INFO: Waiting up to 5m0s for pod "var-expansion-f81654be-9347-4a3c-acb7-8e2446f60d6e" in namespace "var-expansion-5537" to be "success or failure"
Apr 25 21:27:19.088: INFO: Pod "var-expansion-f81654be-9347-4a3c-acb7-8e2446f60d6e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.656815ms
Apr 25 21:27:21.095: INFO: Pod "var-expansion-f81654be-9347-4a3c-acb7-8e2446f60d6e": Phase="Pending", Reason="", readiness=false.
Elapsed: 2.01003526s
Apr 25 21:27:23.099: INFO: Pod "var-expansion-f81654be-9347-4a3c-acb7-8e2446f60d6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014405406s
STEP: Saw pod success
Apr 25 21:27:23.099: INFO: Pod "var-expansion-f81654be-9347-4a3c-acb7-8e2446f60d6e" satisfied condition "success or failure"
Apr 25 21:27:23.102: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-f81654be-9347-4a3c-acb7-8e2446f60d6e container dapi-container:
STEP: delete the pod
Apr 25 21:27:23.150: INFO: Waiting for pod var-expansion-f81654be-9347-4a3c-acb7-8e2446f60d6e to disappear
Apr 25 21:27:23.156: INFO: Pod var-expansion-f81654be-9347-4a3c-acb7-8e2446f60d6e no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 21:27:23.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5537" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":64,"skipped":959,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 21:27:23.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Apr 25 21:27:23.647: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Apr 25 21:27:25.657: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723446843, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723446843, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723446843, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723446843, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 25 21:27:28.693: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 25 21:27:28.696: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 21:27:30.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-9952" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136
• [SLOW TEST:6.965 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":65,"skipped":975,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 21:27:30.131: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a job
STEP: Ensuring
job reaches completions
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 21:27:44.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-7163" for this suite.
• [SLOW TEST:14.086 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":66,"skipped":989,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 21:27:44.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-5643
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 25 21:27:44.312: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Apr 25 21:28:06.434: INFO: ExecWithOptions
{Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.116:8080/dial?request=hostname&protocol=udp&host=10.244.1.206&port=8081&tries=1'] Namespace:pod-network-test-5643 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 25 21:28:06.434: INFO: >>> kubeConfig: /root/.kube/config
I0425 21:28:06.469764 6 log.go:172] (0xc001ae7080) (0xc001e33180) Create stream
I0425 21:28:06.469792 6 log.go:172] (0xc001ae7080) (0xc001e33180) Stream added, broadcasting: 1
I0425 21:28:06.471563 6 log.go:172] (0xc001ae7080) Reply frame received for 1
I0425 21:28:06.471619 6 log.go:172] (0xc001ae7080) (0xc001a91f40) Create stream
I0425 21:28:06.471636 6 log.go:172] (0xc001ae7080) (0xc001a91f40) Stream added, broadcasting: 3
I0425 21:28:06.472619 6 log.go:172] (0xc001ae7080) Reply frame received for 3
I0425 21:28:06.472699 6 log.go:172] (0xc001ae7080) (0xc0022d2000) Create stream
I0425 21:28:06.472721 6 log.go:172] (0xc001ae7080) (0xc0022d2000) Stream added, broadcasting: 5
I0425 21:28:06.474020 6 log.go:172] (0xc001ae7080) Reply frame received for 5
I0425 21:28:06.577756 6 log.go:172] (0xc001ae7080) Data frame received for 3
I0425 21:28:06.577795 6 log.go:172] (0xc001a91f40) (3) Data frame handling
I0425 21:28:06.577822 6 log.go:172] (0xc001a91f40) (3) Data frame sent
I0425 21:28:06.578433 6 log.go:172] (0xc001ae7080) Data frame received for 3
I0425 21:28:06.578460 6 log.go:172] (0xc001a91f40) (3) Data frame handling
I0425 21:28:06.578527 6 log.go:172] (0xc001ae7080) Data frame received for 5
I0425 21:28:06.578551 6 log.go:172] (0xc0022d2000) (5) Data frame handling
I0425 21:28:06.580250 6 log.go:172] (0xc001ae7080) Data frame received for 1
I0425 21:28:06.580284 6 log.go:172] (0xc001e33180) (1) Data frame handling
I0425 21:28:06.580302 6 log.go:172] (0xc001e33180) (1) Data frame sent
I0425 21:28:06.580327 6 log.go:172] (0xc001ae7080) (0xc001e33180) Stream removed, broadcasting: 1
I0425 21:28:06.580361 6 log.go:172] (0xc001ae7080) Go away received
I0425 21:28:06.580507 6 log.go:172] (0xc001ae7080) (0xc001e33180) Stream removed, broadcasting: 1
I0425 21:28:06.580532 6 log.go:172] (0xc001ae7080) (0xc001a91f40) Stream removed, broadcasting: 3
I0425 21:28:06.580549 6 log.go:172] (0xc001ae7080) (0xc0022d2000) Stream removed, broadcasting: 5
Apr 25 21:28:06.580: INFO: Waiting for responses: map[]
Apr 25 21:28:06.583: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.116:8080/dial?request=hostname&protocol=udp&host=10.244.2.115&port=8081&tries=1'] Namespace:pod-network-test-5643 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 25 21:28:06.583: INFO: >>> kubeConfig: /root/.kube/config
I0425 21:28:06.617452 6 log.go:172] (0xc002262b00) (0xc0022d2460) Create stream
I0425 21:28:06.617484 6 log.go:172] (0xc002262b00) (0xc0022d2460) Stream added, broadcasting: 1
I0425 21:28:06.619457 6 log.go:172] (0xc002262b00) Reply frame received for 1
I0425 21:28:06.619513 6 log.go:172] (0xc002262b00) (0xc0022d2500) Create stream
I0425 21:28:06.619536 6 log.go:172] (0xc002262b00) (0xc0022d2500) Stream added, broadcasting: 3
I0425 21:28:06.621463 6 log.go:172] (0xc002262b00) Reply frame received for 3
I0425 21:28:06.621509 6 log.go:172] (0xc002262b00) (0xc00231c000) Create stream
I0425 21:28:06.621526 6 log.go:172] (0xc002262b00) (0xc00231c000) Stream added, broadcasting: 5
I0425 21:28:06.622647 6 log.go:172] (0xc002262b00) Reply frame received for 5
I0425 21:28:06.697861 6 log.go:172] (0xc002262b00) Data frame received for 3
I0425 21:28:06.697969 6 log.go:172] (0xc0022d2500) (3) Data frame handling
I0425 21:28:06.698056 6 log.go:172] (0xc0022d2500) (3) Data frame sent
I0425 21:28:06.698451 6 log.go:172] (0xc002262b00) Data frame received for 3
I0425 21:28:06.698486 6 log.go:172] (0xc0022d2500) (3) Data frame handling
I0425 21:28:06.698512 6 log.go:172] (0xc002262b00) Data frame received for 5
I0425 21:28:06.698537 6 log.go:172] (0xc00231c000) (5) Data frame handling
I0425 21:28:06.700133 6 log.go:172] (0xc002262b00) Data frame received for 1
I0425 21:28:06.700157 6 log.go:172] (0xc0022d2460) (1) Data frame handling
I0425 21:28:06.700197 6 log.go:172] (0xc0022d2460) (1) Data frame sent
I0425 21:28:06.700216 6 log.go:172] (0xc002262b00) (0xc0022d2460) Stream removed, broadcasting: 1
I0425 21:28:06.700296 6 log.go:172] (0xc002262b00) (0xc0022d2460) Stream removed, broadcasting: 1
I0425 21:28:06.700325 6 log.go:172] (0xc002262b00) (0xc0022d2500) Stream removed, broadcasting: 3
I0425 21:28:06.700342 6 log.go:172] (0xc002262b00) (0xc00231c000) Stream removed, broadcasting: 5
I0425 21:28:06.700401 6 log.go:172] (0xc002262b00) Go away received
Apr 25 21:28:06.700: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 21:28:06.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5643" for this suite.
• [SLOW TEST:22.490 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":67,"skipped":1008,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 21:28:06.708: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0644 on node default medium
Apr 25 21:28:06.799: INFO: Waiting up to 5m0s for pod "pod-e31d29e4-16b1-46e8-a577-aadd86394e78" in namespace "emptydir-7330" to be "success or failure"
Apr 25 21:28:06.815: INFO: Pod "pod-e31d29e4-16b1-46e8-a577-aadd86394e78": Phase="Pending", Reason="", readiness=false. Elapsed: 15.728722ms
Apr 25 21:28:08.818: INFO: Pod "pod-e31d29e4-16b1-46e8-a577-aadd86394e78": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018643182s
Apr 25 21:28:10.821: INFO: Pod "pod-e31d29e4-16b1-46e8-a577-aadd86394e78": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022053724s
STEP: Saw pod success
Apr 25 21:28:10.821: INFO: Pod "pod-e31d29e4-16b1-46e8-a577-aadd86394e78" satisfied condition "success or failure"
Apr 25 21:28:10.844: INFO: Trying to get logs from node jerma-worker pod pod-e31d29e4-16b1-46e8-a577-aadd86394e78 container test-container:
STEP: delete the pod
Apr 25 21:28:10.880: INFO: Waiting for pod pod-e31d29e4-16b1-46e8-a577-aadd86394e78 to disappear
Apr 25 21:28:10.893: INFO: Pod pod-e31d29e4-16b1-46e8-a577-aadd86394e78 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 21:28:10.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7330" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":68,"skipped":1033,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 21:28:10.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-map-f7a3401a-111d-4df1-b351-8af997a7f0de
STEP: Creating a pod to test consume secrets
Apr 25 21:28:11.033: INFO: Waiting up to 5m0s for pod "pod-secrets-96ca1059-6126-48a9-8def-06323a29b61d" in namespace "secrets-3843" to be "success or failure"
Apr 25 21:28:11.036: INFO: Pod "pod-secrets-96ca1059-6126-48a9-8def-06323a29b61d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.915705ms
Apr 25 21:28:13.053: INFO: Pod "pod-secrets-96ca1059-6126-48a9-8def-06323a29b61d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01975193s
Apr 25 21:28:15.066: INFO: Pod "pod-secrets-96ca1059-6126-48a9-8def-06323a29b61d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032184498s
STEP: Saw pod success
Apr 25 21:28:15.066: INFO: Pod "pod-secrets-96ca1059-6126-48a9-8def-06323a29b61d" satisfied condition "success or failure"
Apr 25 21:28:15.068: INFO: Trying to get logs from node jerma-worker pod pod-secrets-96ca1059-6126-48a9-8def-06323a29b61d container secret-volume-test:
STEP: delete the pod
Apr 25 21:28:15.086: INFO: Waiting for pod pod-secrets-96ca1059-6126-48a9-8def-06323a29b61d to disappear
Apr 25 21:28:15.091: INFO: Pod pod-secrets-96ca1059-6126-48a9-8def-06323a29b61d no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 21:28:15.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3843" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":69,"skipped":1054,"failed":0}
S
------------------------------
[sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 21:28:15.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Apr 25 21:28:15.244: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-2254 /api/v1/namespaces/watch-2254/configmaps/e2e-watch-test-resource-version 19ec1bee-d9fe-4ebd-972e-53d746a163cd 11016575 0 2020-04-25 21:28:15 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Apr 25 21:28:15.244: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-2254 /api/v1/namespaces/watch-2254/configmaps/e2e-watch-test-resource-version 19ec1bee-d9fe-4ebd-972e-53d746a163cd 11016576 0 2020-04-25 21:28:15 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 21:28:15.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2254" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":70,"skipped":1055,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 21:28:15.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 25 21:28:15.926: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 25 21:28:17.936: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723446895, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723446895, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723446895, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723446895, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 25 21:28:20.968: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 25 21:28:20.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3068-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 21:28:22.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2295" for this suite.
STEP: Destroying namespace "webhook-2295-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:7.035 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":71,"skipped":1068,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 21:28:22.286: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 25 21:28:22.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Apr 25 21:28:25.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4139 create -f -'
Apr 25 21:28:28.333: INFO: stderr: ""
Apr 25 21:28:28.333: INFO: stdout: "e2e-test-crd-publish-openapi-9280-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Apr 25 21:28:28.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4139 delete e2e-test-crd-publish-openapi-9280-crds test-cr'
Apr 25 21:28:28.450: INFO: stderr: ""
Apr 25 21:28:28.450: INFO: stdout: "e2e-test-crd-publish-openapi-9280-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Apr 25 21:28:28.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4139 apply -f -'
Apr 25 21:28:28.710: INFO: stderr: ""
Apr 25 21:28:28.710: INFO: stdout: "e2e-test-crd-publish-openapi-9280-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Apr 25 21:28:28.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4139 delete e2e-test-crd-publish-openapi-9280-crds test-cr'
Apr 25 21:28:28.808: INFO: stderr: ""
Apr 25 21:28:28.808: INFO: stdout: "e2e-test-crd-publish-openapi-9280-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Apr 25 21:28:28.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9280-crds'
Apr 25 21:28:29.071: INFO: stderr: ""
Apr 25 21:28:29.071: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9280-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 21:28:31.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4139" for this suite.
• [SLOW TEST:9.716 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":72,"skipped":1105,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 21:28:32.003: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-downwardapi-vc99
STEP: Creating a pod to test atomic-volume-subpath
Apr 25 21:28:32.285: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-vc99" in namespace "subpath-293" to be "success or failure"
Apr 25 21:28:32.313: INFO: Pod "pod-subpath-test-downwardapi-vc99": Phase="Pending", Reason="", readiness=false. Elapsed: 28.145758ms
Apr 25 21:28:34.317: INFO: Pod "pod-subpath-test-downwardapi-vc99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031938703s
Apr 25 21:28:36.322: INFO: Pod "pod-subpath-test-downwardapi-vc99": Phase="Running", Reason="", readiness=true. Elapsed: 4.036334417s
Apr 25 21:28:38.326: INFO: Pod "pod-subpath-test-downwardapi-vc99": Phase="Running", Reason="", readiness=true. Elapsed: 6.040837855s
Apr 25 21:28:40.330: INFO: Pod "pod-subpath-test-downwardapi-vc99": Phase="Running", Reason="", readiness=true. Elapsed: 8.045247299s
Apr 25 21:28:42.335: INFO: Pod "pod-subpath-test-downwardapi-vc99": Phase="Running", Reason="", readiness=true. Elapsed: 10.049751124s
Apr 25 21:28:44.339: INFO: Pod "pod-subpath-test-downwardapi-vc99": Phase="Running", Reason="", readiness=true. Elapsed: 12.054117107s
Apr 25 21:28:46.343: INFO: Pod "pod-subpath-test-downwardapi-vc99": Phase="Running", Reason="", readiness=true. Elapsed: 14.058130264s
Apr 25 21:28:48.348: INFO: Pod "pod-subpath-test-downwardapi-vc99": Phase="Running", Reason="", readiness=true. Elapsed: 16.062389117s
Apr 25 21:28:50.352: INFO: Pod "pod-subpath-test-downwardapi-vc99": Phase="Running", Reason="", readiness=true. Elapsed: 18.066434936s
Apr 25 21:28:52.356: INFO: Pod "pod-subpath-test-downwardapi-vc99": Phase="Running", Reason="", readiness=true. Elapsed: 20.070801581s
Apr 25 21:28:54.359: INFO: Pod "pod-subpath-test-downwardapi-vc99": Phase="Running", Reason="", readiness=true. Elapsed: 22.07411695s
Apr 25 21:28:56.364: INFO: Pod "pod-subpath-test-downwardapi-vc99": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.07841251s
STEP: Saw pod success
Apr 25 21:28:56.364: INFO: Pod "pod-subpath-test-downwardapi-vc99" satisfied condition "success or failure"
Apr 25 21:28:56.367: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-downwardapi-vc99 container test-container-subpath-downwardapi-vc99:
STEP: delete the pod
Apr 25 21:28:56.387: INFO: Waiting for pod pod-subpath-test-downwardapi-vc99 to disappear
Apr 25 21:28:56.391: INFO: Pod pod-subpath-test-downwardapi-vc99 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-vc99
Apr 25 21:28:56.391: INFO: Deleting pod "pod-subpath-test-downwardapi-vc99" in namespace "subpath-293"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 21:28:56.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-293" for this suite.
• [SLOW TEST:24.423 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":73,"skipped":1127,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 21:28:56.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-projected-all-test-volume-418263ff-039f-4456-9fc6-51e0bbd5ec97
STEP: Creating secret with name secret-projected-all-test-volume-ce36aef0-1e16-44c7-afb5-b575739135a5
STEP: Creating a pod to test Check all projections for projected volume plugin
Apr 25 21:28:56.490: INFO: Waiting up to 5m0s for pod "projected-volume-c36a5af6-581f-49c2-91be-6d452e1b1d3d" in namespace "projected-6268" to be "success or failure"
Apr 25 21:28:56.493: INFO: Pod "projected-volume-c36a5af6-581f-49c2-91be-6d452e1b1d3d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.06082ms
Apr 25 21:28:58.562: INFO: Pod "projected-volume-c36a5af6-581f-49c2-91be-6d452e1b1d3d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07198058s
Apr 25 21:29:00.566: INFO: Pod "projected-volume-c36a5af6-581f-49c2-91be-6d452e1b1d3d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.075998597s
STEP: Saw pod success
Apr 25 21:29:00.566: INFO: Pod "projected-volume-c36a5af6-581f-49c2-91be-6d452e1b1d3d" satisfied condition "success or failure"
Apr 25 21:29:00.569: INFO: Trying to get logs from node jerma-worker pod projected-volume-c36a5af6-581f-49c2-91be-6d452e1b1d3d container projected-all-volume-test:
STEP: delete the pod
Apr 25 21:29:00.605: INFO: Waiting for pod projected-volume-c36a5af6-581f-49c2-91be-6d452e1b1d3d to disappear
Apr 25 21:29:00.635: INFO: Pod projected-volume-c36a5af6-581f-49c2-91be-6d452e1b1d3d no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 21:29:00.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6268" for this suite.
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":74,"skipped":1134,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 21:29:00.645: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-c326e71d-2b05-4f9f-b6eb-28fc45e99046
STEP: Creating a pod to test consume configMaps
Apr 25 21:29:00.721: INFO: Waiting up to 5m0s for pod "pod-configmaps-f926f9ae-4652-42e5-9999-0d2ee08f084d" in namespace "configmap-9017" to be "success or failure"
Apr 25 21:29:00.779: INFO: Pod "pod-configmaps-f926f9ae-4652-42e5-9999-0d2ee08f084d": Phase="Pending", Reason="", readiness=false. Elapsed: 58.028266ms
Apr 25 21:29:02.793: INFO: Pod "pod-configmaps-f926f9ae-4652-42e5-9999-0d2ee08f084d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072411852s
Apr 25 21:29:04.798: INFO: Pod "pod-configmaps-f926f9ae-4652-42e5-9999-0d2ee08f084d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.076594667s
STEP: Saw pod success
Apr 25 21:29:04.798: INFO: Pod "pod-configmaps-f926f9ae-4652-42e5-9999-0d2ee08f084d" satisfied condition "success or failure"
Apr 25 21:29:04.801: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-f926f9ae-4652-42e5-9999-0d2ee08f084d container configmap-volume-test:
STEP: delete the pod
Apr 25 21:29:04.864: INFO: Waiting for pod pod-configmaps-f926f9ae-4652-42e5-9999-0d2ee08f084d to disappear
Apr 25 21:29:04.870: INFO: Pod pod-configmaps-f926f9ae-4652-42e5-9999-0d2ee08f084d no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 21:29:04.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9017" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":75,"skipped":1154,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 21:29:04.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1681
[It] should create a job from an image when restart is OnFailure [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Apr 25 21:29:04.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-7159'
Apr 25 21:29:05.085: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Apr 25 21:29:05.085: INFO: stdout: "job.batch/e2e-test-httpd-job created\n"
STEP: verifying the job e2e-test-httpd-job was created
[AfterEach] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1686
Apr 25 21:29:05.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-7159'
Apr 25 21:29:05.220: INFO: stderr: ""
Apr 25 21:29:05.220: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 21:29:05.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7159" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":76,"skipped":1165,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 21:29:05.226: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should add annotations for pods in rc [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Apr 25 21:29:05.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2677'
Apr 25 21:29:05.641: INFO: stderr: ""
Apr 25 21:29:05.641: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Apr 25 21:29:06.646: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 25 21:29:06.646: INFO: Found 0 / 1
Apr 25 21:29:07.645: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 25 21:29:07.645: INFO: Found 0 / 1
Apr 25 21:29:08.646: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 25 21:29:08.646: INFO: Found 0 / 1
Apr 25 21:29:09.646: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 25 21:29:09.646: INFO: Found 1 / 1
Apr 25 21:29:09.647: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Apr 25 21:29:09.650: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 25 21:29:09.650: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Apr 25 21:29:09.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-nv2n5 --namespace=kubectl-2677 -p {"metadata":{"annotations":{"x":"y"}}}'
Apr 25 21:29:09.751: INFO: stderr: ""
Apr 25 21:29:09.751: INFO: stdout: "pod/agnhost-master-nv2n5 patched\n"
STEP: checking annotations
Apr 25 21:29:09.754: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 25 21:29:09.754: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 21:29:09.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2677" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":77,"skipped":1189,"failed":0} SSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:29:09.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service multi-endpoint-test in namespace services-7915 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7915 to expose endpoints map[] Apr 25 21:29:09.895: INFO: Get endpoints failed (18.903641ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Apr 25 21:29:10.899: INFO: successfully validated that service multi-endpoint-test in namespace services-7915 exposes endpoints map[] (1.023093779s elapsed) STEP: Creating pod pod1 in namespace services-7915 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7915 to expose endpoints map[pod1:[100]] Apr 25 21:29:13.939: INFO: successfully validated that service multi-endpoint-test in namespace services-7915 exposes endpoints map[pod1:[100]] (3.032545648s elapsed) STEP: Creating pod pod2 in namespace services-7915 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7915 to expose endpoints 
map[pod1:[100] pod2:[101]] Apr 25 21:29:16.987: INFO: successfully validated that service multi-endpoint-test in namespace services-7915 exposes endpoints map[pod1:[100] pod2:[101]] (3.042539319s elapsed) STEP: Deleting pod pod1 in namespace services-7915 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7915 to expose endpoints map[pod2:[101]] Apr 25 21:29:18.025: INFO: successfully validated that service multi-endpoint-test in namespace services-7915 exposes endpoints map[pod2:[101]] (1.03370722s elapsed) STEP: Deleting pod pod2 in namespace services-7915 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7915 to expose endpoints map[] Apr 25 21:29:19.042: INFO: successfully validated that service multi-endpoint-test in namespace services-7915 exposes endpoints map[] (1.01203173s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:29:19.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7915" for this suite. 
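The multiport test above drives endpoints `map[pod1:[100] pod2:[101]]` through a single Service. A minimal sketch of such a two-port Service follows; the names, selector, and port numbers mirror this run but are otherwise arbitrary assumptions.

```shell
# Two-port Service shaped like the test's multi-endpoint-test.
cat > /tmp/multi-endpoint-test.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
spec:
  selector:
    app: multi-endpoint-test
  ports:
  - name: portname1
    port: 80
    targetPort: 100
  - name: portname2
    port: 81
    targetPort: 101
EOF
grep -c 'targetPort' /tmp/multi-endpoint-test.yaml   # prints 2

# With a cluster:
#   kubectl apply -f /tmp/multi-endpoint-test.yaml -n services-7915
#   kubectl get endpoints multi-endpoint-test -n services-7915
```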
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:9.446 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":78,"skipped":1194,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:29:19.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1790 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 25 21:29:19.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-5435' Apr 25 21:29:19.488: INFO: stderr: "" Apr 25 21:29:19.488: 
INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created Apr 25 21:29:24.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-5435 -o json' Apr 25 21:29:24.644: INFO: stderr: "" Apr 25 21:29:24.644: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-04-25T21:29:19Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-5435\",\n \"resourceVersion\": \"11017084\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-5435/pods/e2e-test-httpd-pod\",\n \"uid\": \"e8d9c9b9-41f8-42c3-8b05-78269a2ac0e2\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-c6kv6\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-c6kv6\",\n \"secret\": {\n \"defaultMode\": 
420,\n \"secretName\": \"default-token-c6kv6\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-25T21:29:19Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-25T21:29:22Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-25T21:29:22Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-25T21:29:19Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://5e8c88c6f38ec2e1de2d636ceb5b0b319f64a6623f8a5f0c03601e721116585c\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-04-25T21:29:21Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.10\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.213\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.213\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-04-25T21:29:19Z\"\n }\n}\n" STEP: replace the image in the pod Apr 25 21:29:24.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-5435' Apr 25 21:29:24.906: INFO: stderr: "" Apr 25 21:29:24.906: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1795 Apr 25 21:29:24.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete 
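The replace step above pipes a full pod manifest to `kubectl replace -f -` with only the image changed. A sketch of such a replacement manifest, assuming the pod shape from the JSON dump (the `command` is an illustrative addition, since busybox exits immediately without one):

```shell
# Same pod name and labels, image swapped from httpd to busybox:1.29.
cat > /tmp/e2e-test-httpd-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-httpd-pod
  labels:
    run: e2e-test-httpd-pod
spec:
  containers:
  - name: e2e-test-httpd-pod
    image: docker.io/library/busybox:1.29
    command: ["sleep", "3600"]
EOF
grep 'image:' /tmp/e2e-test-httpd-pod.yaml

# With a cluster:
#   kubectl replace -f /tmp/e2e-test-httpd-pod.yaml --namespace=kubectl-5435
#   kubectl get pod e2e-test-httpd-pod -n kubectl-5435 \
#     -o jsonpath='{.spec.containers[0].image}'
```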
pods e2e-test-httpd-pod --namespace=kubectl-5435' Apr 25 21:29:27.910: INFO: stderr: "" Apr 25 21:29:27.910: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:29:27.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5435" for this suite. • [SLOW TEST:8.717 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1786 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":79,"skipped":1204,"failed":0} SSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:29:27.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-4690.svc.cluster.local)" && echo 
OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-4690.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4690.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-4690.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-4690.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4690.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 25 21:29:34.075: INFO: DNS probes using dns-4690/dns-test-58175e86-837f-430d-8efe-bc511639bb0c succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:29:34.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4690" for this suite. 
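The awk pipeline inside those probe scripts turns the pod IP into the dashed pod A-record name. Extracted and run standalone (with the pod IP and namespace from this run substituted for `hostname -i`):

```shell
# Derive the pod A record name from a pod IP, as the probe commands above do.
ip="10.244.1.213"
podARec=$(echo "$ip" | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-4690.pod.cluster.local"}')
echo "$podARec"
# -> 10-244-1-213.dns-4690.pod.cluster.local

# Inside the probe pod the record is then resolved over UDP and TCP:
#   dig +notcp +noall +answer +search "$podARec" A
#   dig +tcp   +noall +answer +search "$podARec" A
```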
• [SLOW TEST:6.257 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":80,"skipped":1213,"failed":0} SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:29:34.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:29:38.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4481" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":81,"skipped":1215,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:29:38.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:29:42.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3059" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":82,"skipped":1230,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:29:42.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:29:53.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-469" for this suite. • [SLOW TEST:11.174 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
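The ResourceQuota lifecycle above (create quota, create service, watch usage rise, delete service, watch usage release) can be sketched with a minimal quota object; the quota name and limit here are illustrative, not the suite's exact values.

```shell
# ResourceQuota capping the number of Service objects in a namespace.
cat > /tmp/quota-for-services.yaml <<'EOF'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota
spec:
  hard:
    services: "1"
EOF
grep 'services' /tmp/quota-for-services.yaml

# With a cluster:
#   kubectl apply -f /tmp/quota-for-services.yaml -n resourcequota-469
#   kubectl get resourcequota test-quota -n resourcequota-469 \
#     -o jsonpath='{.status.used.services}'
```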
[Conformance]","total":278,"completed":83,"skipped":1241,"failed":0} SSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:29:53.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 25 21:29:53.605: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota Apr 25 21:29:55.670: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:29:56.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1172" for this suite. 
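The failure-condition scenario above pairs a two-pod quota with an RC requesting more replicas, leaving the RC with a failure condition until it is scaled down. A sketch of such an RC, with illustrative labels and the httpd image this suite uses elsewhere:

```shell
# RC asking for three replicas; under a two-pod quota it surfaces a
# ReplicaFailure condition until scaled to fit.
cat > /tmp/condition-test.yaml <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3
  selector:
    name: condition-test
  template:
    metadata:
      labels:
        name: condition-test
    spec:
      containers:
      - name: httpd
        image: docker.io/library/httpd:2.4.38-alpine
EOF
grep 'replicas' /tmp/condition-test.yaml

# With a cluster, the condition clears after:
#   kubectl scale rc condition-test --replicas=2
#   kubectl get rc condition-test -o jsonpath='{.status.conditions}'
```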
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":84,"skipped":1248,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:29:56.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 25 21:29:56.855: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:30:01.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6035" for this suite. 
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":85,"skipped":1261,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:30:01.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 25 21:30:01.987: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 25 21:30:03.998: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723447001, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723447001, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723447002, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723447001, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 25 21:30:07.028: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook Apr 25 21:30:07.055: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:30:07.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2074" for this suite. STEP: Destroying namespace "webhook-2074-markers" for this suite. 
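The registration step above wires CRD creation through the deployed webhook service. The following is only a shape sketch of such a ValidatingWebhookConfiguration: the webhook name, path, failure policy, and the empty `caBundle` are illustrative assumptions, not the suite's exact object.

```shell
# Validating webhook intercepting CustomResourceDefinition CREATE requests.
cat > /tmp/deny-crd-webhook.yaml <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-crd-webhook
webhooks:
- name: deny-crd.example.com
  rules:
  - apiGroups: ["apiextensions.k8s.io"]
    apiVersions: ["*"]
    operations: ["CREATE"]
    resources: ["customresourcedefinitions"]
  clientConfig:
    service:
      namespace: webhook-2074
      name: e2e-test-webhook
      path: /crd
    caBundle: ""   # PEM-encoded CA bundle in a real cluster
  admissionReviewVersions: ["v1", "v1beta1"]
  sideEffects: None
  failurePolicy: Fail
EOF
grep -q 'customresourcedefinitions' /tmp/deny-crd-webhook.yaml && echo ok
```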
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.776 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":86,"skipped":1315,"failed":0} SSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:30:07.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Apr 25 21:30:07.308: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 25 21:30:07.317: INFO: Waiting for terminating namespaces to be deleted... 
Apr 25 21:30:07.320: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Apr 25 21:30:07.326: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 25 21:30:07.326: INFO: Container kindnet-cni ready: true, restart count 0 Apr 25 21:30:07.326: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 25 21:30:07.326: INFO: Container kube-proxy ready: true, restart count 0 Apr 25 21:30:07.326: INFO: busybox-readonly-fsf07651a5-99b5-420f-b69b-d3bcafd17689 from kubelet-test-3059 started at 2020-04-25 21:29:38 +0000 UTC (1 container statuses recorded) Apr 25 21:30:07.326: INFO: Container busybox-readonly-fsf07651a5-99b5-420f-b69b-d3bcafd17689 ready: true, restart count 0 Apr 25 21:30:07.326: INFO: busybox-host-aliases3980c944-f0c2-4e87-84dd-d8ca98cd263c from kubelet-test-4481 started at 2020-04-25 21:29:34 +0000 UTC (1 container statuses recorded) Apr 25 21:30:07.326: INFO: Container busybox-host-aliases3980c944-f0c2-4e87-84dd-d8ca98cd263c ready: true, restart count 0 Apr 25 21:30:07.326: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Apr 25 21:30:07.334: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 25 21:30:07.334: INFO: Container kindnet-cni ready: true, restart count 0 Apr 25 21:30:07.334: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) Apr 25 21:30:07.334: INFO: Container kube-bench ready: false, restart count 0 Apr 25 21:30:07.334: INFO: pod-logs-websocket-973cb3fd-d17b-4250-bb1d-97ce43be6bef from pods-6035 started at 2020-04-25 21:29:57 +0000 UTC (1 container statuses recorded) Apr 25 21:30:07.334: INFO: Container main ready: true, restart count 0 Apr 25 21:30:07.334: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses 
recorded) Apr 25 21:30:07.334: INFO: Container kube-proxy ready: true, restart count 0 Apr 25 21:30:07.334: INFO: sample-webhook-deployment-5f65f8c764-j9skg from webhook-2074 started at 2020-04-25 21:30:02 +0000 UTC (1 container statuses recorded) Apr 25 21:30:07.334: INFO: Container sample-webhook ready: true, restart count 0 Apr 25 21:30:07.334: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) Apr 25 21:30:07.334: INFO: Container kube-hunter ready: false, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.16092d59ce27edd9], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:30:08.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5163" for this suite. 
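The FailedScheduling event above comes from a pod whose nodeSelector no node satisfies. A sketch of such a pod; the selector key/value and the pause image are illustrative assumptions:

```shell
# Pod with a node selector that matches no node, so it stays Pending with a
# "0/N nodes are available" FailedScheduling event.
cat > /tmp/restricted-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    label: nonempty
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF
grep -A1 'nodeSelector' /tmp/restricted-pod.yaml

# With a cluster:
#   kubectl apply -f /tmp/restricted-pod.yaml
#   kubectl get events --field-selector reason=FailedScheduling
```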
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":87,"skipped":1322,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:30:08.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-a0e452f9-75ed-4cf6-9af4-5174973cffaa STEP: Creating secret with name s-test-opt-upd-a35eb902-e071-400e-be14-316c472956af STEP: Creating the pod STEP: Deleting secret s-test-opt-del-a0e452f9-75ed-4cf6-9af4-5174973cffaa STEP: Updating secret s-test-opt-upd-a35eb902-e071-400e-be14-316c472956af STEP: Creating secret with name s-test-opt-create-ae41cac3-c5b1-490d-afab-55d8f951ad3a STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:30:18.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6703" for this suite. 
• [SLOW TEST:10.211 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":88,"skipped":1352,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:30:18.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0425 21:30:58.912789 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 25 21:30:58.912: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:30:58.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1682" for this suite. 
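The orphaning delete exercised above amounts to a DeleteOptions body with `propagationPolicy: Orphan`, which removes the RC but leaves its pods running. The RC name below is illustrative, not the suite's:

```shell
# DeleteOptions payload for an orphaning delete.
opts='{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Orphan"}'
echo "$opts"

# With a cluster, equivalently (--cascade=false on kubectl of this vintage,
# --cascade=orphan on newer releases):
#   kubectl delete rc my-rc --cascade=false
#   kubectl get pods   # pods created by my-rc should still be listed
```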
• [SLOW TEST:40.338 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":89,"skipped":1367,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:30:58.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-2bd53573-9115-49e3-b60a-229b63d1ebde STEP: Creating a pod to test consume secrets Apr 25 21:30:58.984: INFO: Waiting up to 5m0s for pod "pod-secrets-579db43d-3252-438b-bf5f-98f08df07f69" in namespace "secrets-834" to be "success or failure" Apr 25 21:30:59.002: INFO: Pod "pod-secrets-579db43d-3252-438b-bf5f-98f08df07f69": Phase="Pending", Reason="", readiness=false. Elapsed: 18.022652ms Apr 25 21:31:01.005: INFO: Pod "pod-secrets-579db43d-3252-438b-bf5f-98f08df07f69": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.021799101s Apr 25 21:31:03.010: INFO: Pod "pod-secrets-579db43d-3252-438b-bf5f-98f08df07f69": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026130282s STEP: Saw pod success Apr 25 21:31:03.010: INFO: Pod "pod-secrets-579db43d-3252-438b-bf5f-98f08df07f69" satisfied condition "success or failure" Apr 25 21:31:03.013: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-579db43d-3252-438b-bf5f-98f08df07f69 container secret-volume-test: STEP: delete the pod Apr 25 21:31:03.031: INFO: Waiting for pod pod-secrets-579db43d-3252-438b-bf5f-98f08df07f69 to disappear Apr 25 21:31:03.055: INFO: Pod pod-secrets-579db43d-3252-438b-bf5f-98f08df07f69 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:31:03.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-834" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":90,"skipped":1396,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:31:03.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Apr 25 21:31:03.147: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 25 21:31:03.170: INFO: Waiting for terminating namespaces to be deleted... Apr 25 21:31:03.172: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Apr 25 21:31:03.180: INFO: simpletest.rc-8497p from gc-1682 started at 2020-04-25 21:30:18 +0000 UTC (1 container statuses recorded) Apr 25 21:31:03.180: INFO: Container nginx ready: true, restart count 0 Apr 25 21:31:03.180: INFO: simpletest.rc-ms7hm from gc-1682 started at 2020-04-25 21:30:18 +0000 UTC (1 container statuses recorded) Apr 25 21:31:03.180: INFO: Container nginx ready: true, restart count 0 Apr 25 21:31:03.180: INFO: simpletest.rc-7zchz from gc-1682 started at 2020-04-25 21:30:19 +0000 UTC (1 container statuses recorded) Apr 25 21:31:03.180: INFO: Container nginx ready: true, restart count 0 Apr 25 21:31:03.180: INFO: simpletest.rc-fjpxx from gc-1682 started at 2020-04-25 21:30:18 +0000 UTC (1 container statuses recorded) Apr 25 21:31:03.180: INFO: Container nginx ready: true, restart count 0 Apr 25 21:31:03.180: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 25 21:31:03.180: INFO: Container kindnet-cni ready: true, restart count 0 Apr 25 21:31:03.180: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 25 21:31:03.180: INFO: Container kube-proxy ready: true, restart count 0 Apr 25 21:31:03.180: INFO: simpletest.rc-7ssf8 from gc-1682 started at 2020-04-25 21:30:18 +0000 UTC (1 container statuses recorded) Apr 25 21:31:03.180: INFO: Container nginx ready: true, restart count 0 Apr 25 21:31:03.180: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Apr 25 21:31:03.186: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 
container statuses recorded) Apr 25 21:31:03.186: INFO: Container kube-hunter ready: false, restart count 0 Apr 25 21:31:03.186: INFO: simpletest.rc-ghq5p from gc-1682 started at 2020-04-25 21:30:18 +0000 UTC (1 container statuses recorded) Apr 25 21:31:03.186: INFO: Container nginx ready: true, restart count 0 Apr 25 21:31:03.186: INFO: simpletest.rc-nfrhw from gc-1682 started at 2020-04-25 21:30:19 +0000 UTC (1 container statuses recorded) Apr 25 21:31:03.186: INFO: Container nginx ready: true, restart count 0 Apr 25 21:31:03.186: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 25 21:31:03.186: INFO: Container kindnet-cni ready: true, restart count 0 Apr 25 21:31:03.186: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) Apr 25 21:31:03.186: INFO: Container kube-bench ready: false, restart count 0 Apr 25 21:31:03.186: INFO: simpletest.rc-wvp4k from gc-1682 started at 2020-04-25 21:30:18 +0000 UTC (1 container statuses recorded) Apr 25 21:31:03.186: INFO: Container nginx ready: true, restart count 0 Apr 25 21:31:03.186: INFO: simpletest.rc-9wvdk from gc-1682 started at 2020-04-25 21:30:19 +0000 UTC (1 container statuses recorded) Apr 25 21:31:03.186: INFO: Container nginx ready: true, restart count 0 Apr 25 21:31:03.186: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 25 21:31:03.186: INFO: Container kube-proxy ready: true, restart count 0 Apr 25 21:31:03.186: INFO: simpletest.rc-j4852 from gc-1682 started at 2020-04-25 21:30:18 +0000 UTC (1 container statuses recorded) Apr 25 21:31:03.186: INFO: Container nginx ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can 
launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-30cd2058-e58a-42a8-9c2a-9d5c073889d4 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-30cd2058-e58a-42a8-9c2a-9d5c073889d4 off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-30cd2058-e58a-42a8-9c2a-9d5c073889d4 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:31:12.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4323" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:9.062 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":91,"skipped":1436,"failed":0} S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:31:12.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-3cbeabe2-4fc9-4f7f-b1df-f8904ac6d6b0 STEP: Creating a pod to test consume secrets Apr 25 21:31:12.260: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ee5a9b58-08dd-47d9-b167-ddda03a95580" in namespace "projected-552" to be "success or failure" Apr 25 21:31:12.270: INFO: Pod "pod-projected-secrets-ee5a9b58-08dd-47d9-b167-ddda03a95580": Phase="Pending", Reason="", readiness=false. Elapsed: 9.977922ms Apr 25 21:31:14.274: INFO: Pod "pod-projected-secrets-ee5a9b58-08dd-47d9-b167-ddda03a95580": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013942388s Apr 25 21:31:16.278: INFO: Pod "pod-projected-secrets-ee5a9b58-08dd-47d9-b167-ddda03a95580": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018198593s STEP: Saw pod success Apr 25 21:31:16.278: INFO: Pod "pod-projected-secrets-ee5a9b58-08dd-47d9-b167-ddda03a95580" satisfied condition "success or failure" Apr 25 21:31:16.281: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-ee5a9b58-08dd-47d9-b167-ddda03a95580 container projected-secret-volume-test: STEP: delete the pod Apr 25 21:31:16.302: INFO: Waiting for pod pod-projected-secrets-ee5a9b58-08dd-47d9-b167-ddda03a95580 to disappear Apr 25 21:31:16.306: INFO: Pod pod-projected-secrets-ee5a9b58-08dd-47d9-b167-ddda03a95580 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:31:16.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-552" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":92,"skipped":1437,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:31:16.313: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override arguments Apr 25 21:31:16.430: INFO: Waiting up to 5m0s for pod "client-containers-d580dc2a-b6fc-4b04-888c-b4f3cb7a3da5" in namespace "containers-951" to be "success or failure" Apr 25 21:31:16.438: INFO: Pod "client-containers-d580dc2a-b6fc-4b04-888c-b4f3cb7a3da5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.368394ms Apr 25 21:31:18.442: INFO: Pod "client-containers-d580dc2a-b6fc-4b04-888c-b4f3cb7a3da5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011875052s Apr 25 21:31:20.511: INFO: Pod "client-containers-d580dc2a-b6fc-4b04-888c-b4f3cb7a3da5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.081011679s STEP: Saw pod success Apr 25 21:31:20.511: INFO: Pod "client-containers-d580dc2a-b6fc-4b04-888c-b4f3cb7a3da5" satisfied condition "success or failure" Apr 25 21:31:20.514: INFO: Trying to get logs from node jerma-worker2 pod client-containers-d580dc2a-b6fc-4b04-888c-b4f3cb7a3da5 container test-container: STEP: delete the pod Apr 25 21:31:20.592: INFO: Waiting for pod client-containers-d580dc2a-b6fc-4b04-888c-b4f3cb7a3da5 to disappear Apr 25 21:31:20.598: INFO: Pod client-containers-d580dc2a-b6fc-4b04-888c-b4f3cb7a3da5 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:31:20.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-951" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":93,"skipped":1468,"failed":0} SSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:31:20.606: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication controller my-hostname-basic-3d38f917-9884-493e-8b1f-0024b2b200c2 Apr 25 21:31:20.682: INFO: Pod name 
my-hostname-basic-3d38f917-9884-493e-8b1f-0024b2b200c2: Found 0 pods out of 1 Apr 25 21:31:25.715: INFO: Pod name my-hostname-basic-3d38f917-9884-493e-8b1f-0024b2b200c2: Found 1 pods out of 1 Apr 25 21:31:25.715: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-3d38f917-9884-493e-8b1f-0024b2b200c2" are running Apr 25 21:31:25.725: INFO: Pod "my-hostname-basic-3d38f917-9884-493e-8b1f-0024b2b200c2-4l6gd" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-25 21:31:20 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-25 21:31:23 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-25 21:31:23 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-25 21:31:20 +0000 UTC Reason: Message:}]) Apr 25 21:31:25.725: INFO: Trying to dial the pod Apr 25 21:31:30.735: INFO: Controller my-hostname-basic-3d38f917-9884-493e-8b1f-0024b2b200c2: Got expected result from replica 1 [my-hostname-basic-3d38f917-9884-493e-8b1f-0024b2b200c2-4l6gd]: "my-hostname-basic-3d38f917-9884-493e-8b1f-0024b2b200c2-4l6gd", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:31:30.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4651" for this suite. 
• [SLOW TEST:10.137 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":94,"skipped":1472,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:31:30.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 25 21:31:34.872: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 
Apr 25 21:31:35.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6725" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":95,"skipped":1500,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:31:35.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Apr 25 21:31:35.200: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:31:35.205: INFO: Number of nodes with available pods: 0 Apr 25 21:31:35.205: INFO: Node jerma-worker is running more than one daemon pod Apr 25 21:31:36.208: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:31:36.210: INFO: Number of nodes with available pods: 0 Apr 25 21:31:36.210: INFO: Node jerma-worker is running more than one daemon pod Apr 25 21:31:37.452: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:31:37.455: INFO: Number of nodes with available pods: 0 Apr 25 21:31:37.455: INFO: Node jerma-worker is running more than one daemon pod Apr 25 21:31:38.237: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:31:38.240: INFO: Number of nodes with available pods: 0 Apr 25 21:31:38.240: INFO: Node jerma-worker is running more than one daemon pod Apr 25 21:31:39.212: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:31:39.216: INFO: Number of nodes with available pods: 1 Apr 25 21:31:39.216: INFO: Node jerma-worker2 is running more than one daemon pod Apr 25 21:31:40.208: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:31:40.211: INFO: Number of nodes with available pods: 2 Apr 25 21:31:40.211: INFO: Number 
of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. Apr 25 21:31:40.292: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:31:40.295: INFO: Number of nodes with available pods: 1 Apr 25 21:31:40.295: INFO: Node jerma-worker2 is running more than one daemon pod Apr 25 21:31:41.301: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:31:41.304: INFO: Number of nodes with available pods: 1 Apr 25 21:31:41.304: INFO: Node jerma-worker2 is running more than one daemon pod Apr 25 21:31:42.302: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:31:42.318: INFO: Number of nodes with available pods: 1 Apr 25 21:31:42.318: INFO: Node jerma-worker2 is running more than one daemon pod Apr 25 21:31:43.301: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:31:43.304: INFO: Number of nodes with available pods: 1 Apr 25 21:31:43.304: INFO: Node jerma-worker2 is running more than one daemon pod Apr 25 21:31:44.300: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:31:44.303: INFO: Number of nodes with available pods: 1 Apr 25 21:31:44.303: INFO: Node jerma-worker2 is running more than one daemon pod Apr 25 21:31:45.301: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip 
checking this node Apr 25 21:31:45.304: INFO: Number of nodes with available pods: 1 Apr 25 21:31:45.304: INFO: Node jerma-worker2 is running more than one daemon pod Apr 25 21:31:46.301: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:31:46.304: INFO: Number of nodes with available pods: 1 Apr 25 21:31:46.304: INFO: Node jerma-worker2 is running more than one daemon pod Apr 25 21:31:47.300: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:31:47.304: INFO: Number of nodes with available pods: 1 Apr 25 21:31:47.304: INFO: Node jerma-worker2 is running more than one daemon pod Apr 25 21:31:48.300: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:31:48.304: INFO: Number of nodes with available pods: 1 Apr 25 21:31:48.304: INFO: Node jerma-worker2 is running more than one daemon pod Apr 25 21:31:49.301: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:31:49.305: INFO: Number of nodes with available pods: 1 Apr 25 21:31:49.305: INFO: Node jerma-worker2 is running more than one daemon pod Apr 25 21:31:50.301: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:31:50.305: INFO: Number of nodes with available pods: 1 Apr 25 21:31:50.305: INFO: Node jerma-worker2 is running more than one daemon pod Apr 25 21:31:51.301: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:31:51.304: INFO: Number of nodes with available pods: 1 Apr 25 21:31:51.305: INFO: Node jerma-worker2 is running more than one daemon pod Apr 25 21:31:52.301: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:31:52.305: INFO: Number of nodes with available pods: 1 Apr 25 21:31:52.305: INFO: Node jerma-worker2 is running more than one daemon pod Apr 25 21:31:53.301: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:31:53.304: INFO: Number of nodes with available pods: 2 Apr 25 21:31:53.304: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2386, will wait for the garbage collector to delete the pods Apr 25 21:31:53.367: INFO: Deleting DaemonSet.extensions daemon-set took: 6.593704ms Apr 25 21:31:53.667: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.216543ms Apr 25 21:31:59.583: INFO: Number of nodes with available pods: 0 Apr 25 21:31:59.583: INFO: Number of running nodes: 0, number of available pods: 0 Apr 25 21:31:59.585: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2386/daemonsets","resourceVersion":"11018322"},"items":null} Apr 25 21:31:59.587: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2386/pods","resourceVersion":"11018322"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:31:59.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2386" for this suite. • [SLOW TEST:24.561 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":96,"skipped":1512,"failed":0} [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:31:59.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 25 21:31:59.668: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b9e79079-0f30-448b-9479-ce73fbf0ea76" in namespace "downward-api-9924" to be "success or failure" Apr 25 21:31:59.680: INFO: Pod "downwardapi-volume-b9e79079-0f30-448b-9479-ce73fbf0ea76": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.981069ms Apr 25 21:32:01.684: INFO: Pod "downwardapi-volume-b9e79079-0f30-448b-9479-ce73fbf0ea76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01661496s Apr 25 21:32:03.689: INFO: Pod "downwardapi-volume-b9e79079-0f30-448b-9479-ce73fbf0ea76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021233928s STEP: Saw pod success Apr 25 21:32:03.689: INFO: Pod "downwardapi-volume-b9e79079-0f30-448b-9479-ce73fbf0ea76" satisfied condition "success or failure" Apr 25 21:32:03.693: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-b9e79079-0f30-448b-9479-ce73fbf0ea76 container client-container: STEP: delete the pod Apr 25 21:32:03.728: INFO: Waiting for pod downwardapi-volume-b9e79079-0f30-448b-9479-ce73fbf0ea76 to disappear Apr 25 21:32:03.738: INFO: Pod downwardapi-volume-b9e79079-0f30-448b-9479-ce73fbf0ea76 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:32:03.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9924" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":97,"skipped":1512,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:32:03.745: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 25 21:32:03.810: INFO: Pod name rollover-pod: Found 0 pods out of 1 Apr 25 21:32:08.815: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 25 21:32:08.815: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Apr 25 21:32:10.820: INFO: Creating deployment "test-rollover-deployment" Apr 25 21:32:10.827: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Apr 25 21:32:12.833: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Apr 25 21:32:12.838: INFO: Ensure that both replica sets have 1 created replica Apr 25 21:32:12.843: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Apr 25 21:32:12.849: INFO: Updating deployment test-rollover-deployment Apr 25 21:32:12.849: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Apr 25 
21:32:14.861: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Apr 25 21:32:14.867: INFO: Make sure deployment "test-rollover-deployment" is complete Apr 25 21:32:14.872: INFO: all replica sets need to contain the pod-template-hash label Apr 25 21:32:14.872: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723447130, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723447130, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723447133, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723447130, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 25 21:32:16.880: INFO: all replica sets need to contain the pod-template-hash label Apr 25 21:32:16.880: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723447130, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723447130, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723447135, 
loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723447130, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 25 21:32:18.879: INFO: all replica sets need to contain the pod-template-hash label Apr 25 21:32:18.879: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723447130, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723447130, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723447135, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723447130, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 25 21:32:20.881: INFO: all replica sets need to contain the pod-template-hash label Apr 25 21:32:20.881: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723447130, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723447130, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723447135, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723447130, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 25 21:32:22.881: INFO: all replica sets need to contain the pod-template-hash label Apr 25 21:32:22.881: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723447130, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723447130, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723447135, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723447130, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 25 21:32:24.879: INFO: all replica sets need to contain the pod-template-hash label Apr 25 21:32:24.879: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723447130, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723447130, 
loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723447135, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723447130, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 25 21:32:26.884: INFO: Apr 25 21:32:26.884: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Apr 25 21:32:26.892: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-6311 /apis/apps/v1/namespaces/deployment-6311/deployments/test-rollover-deployment 94e6f801-845a-436f-b0c1-f576646d28f8 11018518 2 2020-04-25 21:32:10 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0025f2c98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-04-25 21:32:10 +0000 UTC,LastTransitionTime:2020-04-25 21:32:10 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-04-25 21:32:25 +0000 UTC,LastTransitionTime:2020-04-25 21:32:10 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 25 21:32:26.895: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-6311 /apis/apps/v1/namespaces/deployment-6311/replicasets/test-rollover-deployment-574d6dfbff 0953ab93-2c79-4402-b750-ec52732b1a64 11018507 2 2020-04-25 21:32:12 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 94e6f801-845a-436f-b0c1-f576646d28f8 0xc0025f3107 0xc0025f3108}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0025f3178 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 25 21:32:26.895: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Apr 25 21:32:26.895: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-6311 /apis/apps/v1/namespaces/deployment-6311/replicasets/test-rollover-controller 67ee0062-cc95-45f5-a1dd-b39d995bf075 11018517 2 2020-04-25 21:32:03 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 94e6f801-845a-436f-b0c1-f576646d28f8 0xc0025f301f 0xc0025f3030}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0025f3098 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 25 21:32:26.895: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-6311 /apis/apps/v1/namespaces/deployment-6311/replicasets/test-rollover-deployment-f6c94f66c 86c4670a-2ea3-4243-9dd5-cccf9c2d1ed1 11018461 2 2020-04-25 21:32:10 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 94e6f801-845a-436f-b0c1-f576646d28f8 0xc0025f31e0 0xc0025f31e1}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0025f3258 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 25 21:32:26.898: INFO: Pod "test-rollover-deployment-574d6dfbff-296lz" is available: 
&Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-296lz test-rollover-deployment-574d6dfbff- deployment-6311 /api/v1/namespaces/deployment-6311/pods/test-rollover-deployment-574d6dfbff-296lz cad01725-8f53-4325-bb98-d03f9b87c96b 11018475 0 2020-04-25 21:32:12 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 0953ab93-2c79-4402-b750-ec52732b1a64 0xc0025f3797 0xc0025f3798}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2qs6p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2qs6p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2qs6p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,Terminatio
nGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:32:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:32:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:32:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:32:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.227,StartTime:2020-04-25 21:32:13 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-25 21:32:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://79b9670abf23cab36c3418261ebe30d3fe17d158fcb7fc599ec854e04f322457,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.227,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:32:26.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6311" for this suite. 
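The rollover behavior verified above is driven by the deployment's rolling-update parameters, which the dumped spec records as `MaxUnavailable:0`, `MaxSurge:1`, and `MinReadySeconds:10`. A manifest sketch reconstructed from that logged spec (names, labels, image, and values all come from the log; anything beyond them is omitted):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-deployment
spec:
  replicas: 1
  minReadySeconds: 10          # a new pod must stay ready 10s before it counts as available
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
  selector:
    matchLabels:
      name: rollover-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0        # never drop below the desired replica count during rollover
      maxSurge: 1              # allow one extra pod while the new ReplicaSet scales up
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
      - name: agnhost
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
```

With `maxUnavailable: 0` and `minReadySeconds: 10`, the old ReplicaSet can only scale down after the new pod has been ready for the full ten seconds, which is why the status above reports `UnavailableReplicas:1` for several polling intervals before the rollover completes.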
• [SLOW TEST:23.161 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":98,"skipped":1526,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:32:26.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Apr 25 21:32:26.955: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:32:34.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8654" for this suite. 
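The InitContainer test above ("PodSpec: initContainers in spec.initContainers") checks that init containers run to completion, one at a time and in order, before the regular containers of a `RestartAlways` pod start. A minimal pod sketch of that shape (images and commands are illustrative, not the test's fixture):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Always
  initContainers:              # run sequentially; each must exit 0 before the next starts
  - name: init-1
    image: busybox
    command: ["sh", "-c", "true"]
  - name: init-2
    image: busybox
    command: ["sh", "-c", "true"]
  containers:                  # started only after every init container has succeeded
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
```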
• [SLOW TEST:7.589 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":99,"skipped":1556,"failed":0} [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:32:34.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 25 21:32:38.587: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:32:38.679: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4915" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":100,"skipped":1556,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:32:38.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-6136 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-6136 STEP: Deleting pre-stop pod Apr 25 21:32:51.864: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." 
], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:32:51.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-6136" for this suite. • [SLOW TEST:13.192 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":101,"skipped":1594,"failed":0} SSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:32:51.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Apr 25 21:32:51.982: INFO: Waiting up to 5m0s for pod "downward-api-9493e829-557f-40f4-ab39-8d6aac0aed39" in namespace "downward-api-8379" to be "success or failure" Apr 25 21:32:51.985: INFO: Pod "downward-api-9493e829-557f-40f4-ab39-8d6aac0aed39": Phase="Pending", Reason="", 
readiness=false. Elapsed: 3.097575ms Apr 25 21:32:53.990: INFO: Pod "downward-api-9493e829-557f-40f4-ab39-8d6aac0aed39": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007657183s Apr 25 21:32:55.994: INFO: Pod "downward-api-9493e829-557f-40f4-ab39-8d6aac0aed39": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011902093s STEP: Saw pod success Apr 25 21:32:55.994: INFO: Pod "downward-api-9493e829-557f-40f4-ab39-8d6aac0aed39" satisfied condition "success or failure" Apr 25 21:32:55.998: INFO: Trying to get logs from node jerma-worker2 pod downward-api-9493e829-557f-40f4-ab39-8d6aac0aed39 container dapi-container: STEP: delete the pod Apr 25 21:32:56.015: INFO: Waiting for pod downward-api-9493e829-557f-40f4-ab39-8d6aac0aed39 to disappear Apr 25 21:32:56.020: INFO: Pod downward-api-9493e829-557f-40f4-ab39-8d6aac0aed39 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:32:56.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8379" for this suite. 
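The Downward API test above verifies that when a container declares no resource limits, `resourceFieldRef` environment variables for `limits.cpu` and `limits.memory` fall back to the node's allocatable capacity. A pod sketch of the mechanism (the container name `dapi-container` comes from the log; the image and variable names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-env-example
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env"]   # print the injected values, then exit
    env:
    - name: CPU_LIMIT              # no limit declared, so this resolves to node allocatable CPU
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu
    - name: MEMORY_LIMIT           # likewise resolves to node allocatable memory
      valueFrom:
        resourceFieldRef:
          resource: limits.memory
```

As with the earlier downward-API case, the test waits for `Succeeded` and inspects the container's log output to confirm the defaulted values.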
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":102,"skipped":1605,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:32:56.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1357 STEP: creating an pod Apr 25 21:32:56.081: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-7904 -- logs-generator --log-lines-total 100 --run-duration 20s' Apr 25 21:32:56.191: INFO: stderr: "" Apr 25 21:32:56.191: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Waiting for log generator to start. Apr 25 21:32:56.191: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Apr 25 21:32:56.191: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-7904" to be "running and ready, or succeeded" Apr 25 21:32:56.215: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. 
Elapsed: 23.540078ms Apr 25 21:32:58.219: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027665064s Apr 25 21:33:00.224: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.032021672s Apr 25 21:33:00.224: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Apr 25 21:33:00.224: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] STEP: checking for a matching strings Apr 25 21:33:00.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7904' Apr 25 21:33:00.344: INFO: stderr: "" Apr 25 21:33:00.344: INFO: stdout: "I0425 21:32:58.654129 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/kube-system/pods/q8p 333\nI0425 21:32:58.854395 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/kube-system/pods/hz9f 592\nI0425 21:32:59.054296 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/cvfh 296\nI0425 21:32:59.254266 1 logs_generator.go:76] 3 POST /api/v1/namespaces/kube-system/pods/dl7d 238\nI0425 21:32:59.454398 1 logs_generator.go:76] 4 GET /api/v1/namespaces/kube-system/pods/vt9 445\nI0425 21:32:59.654296 1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/wrh 323\nI0425 21:32:59.854297 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/xc7 359\nI0425 21:33:00.054303 1 logs_generator.go:76] 7 POST /api/v1/namespaces/kube-system/pods/bsfh 437\nI0425 21:33:00.254292 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/b79n 257\n" STEP: limiting log lines Apr 25 21:33:00.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7904 --tail=1' Apr 25 21:33:00.458: INFO: stderr: "" Apr 25 21:33:00.458: INFO: stdout: "I0425 21:33:00.254292 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/b79n 257\n" Apr 25 21:33:00.458: INFO: got output "I0425 
21:33:00.254292 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/b79n 257\n" STEP: limiting log bytes Apr 25 21:33:00.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7904 --limit-bytes=1' Apr 25 21:33:00.566: INFO: stderr: "" Apr 25 21:33:00.566: INFO: stdout: "I" Apr 25 21:33:00.566: INFO: got output "I" STEP: exposing timestamps Apr 25 21:33:00.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7904 --tail=1 --timestamps' Apr 25 21:33:00.678: INFO: stderr: "" Apr 25 21:33:00.678: INFO: stdout: "2020-04-25T21:33:00.654431151Z I0425 21:33:00.654286 1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/9fhh 242\n" Apr 25 21:33:00.678: INFO: got output "2020-04-25T21:33:00.654431151Z I0425 21:33:00.654286 1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/9fhh 242\n" STEP: restricting to a time range Apr 25 21:33:03.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7904 --since=1s' Apr 25 21:33:03.295: INFO: stderr: "" Apr 25 21:33:03.295: INFO: stdout: "I0425 21:33:02.454339 1 logs_generator.go:76] 19 GET /api/v1/namespaces/kube-system/pods/p9t 470\nI0425 21:33:02.654283 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/ns/pods/5lv9 537\nI0425 21:33:02.854319 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/kube-system/pods/54x 584\nI0425 21:33:03.054369 1 logs_generator.go:76] 22 POST /api/v1/namespaces/kube-system/pods/rhvz 280\nI0425 21:33:03.254315 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/kube-system/pods/xw7j 353\n" Apr 25 21:33:03.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-7904 --since=24h' Apr 25 21:33:03.397: INFO: stderr: "" Apr 25 21:33:03.397: INFO: stdout: "I0425 21:32:58.654129 1 logs_generator.go:76] 
0 PUT /api/v1/namespaces/kube-system/pods/q8p 333\nI0425 21:32:58.854395 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/kube-system/pods/hz9f 592\nI0425 21:32:59.054296 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/cvfh 296\nI0425 21:32:59.254266 1 logs_generator.go:76] 3 POST /api/v1/namespaces/kube-system/pods/dl7d 238\nI0425 21:32:59.454398 1 logs_generator.go:76] 4 GET /api/v1/namespaces/kube-system/pods/vt9 445\nI0425 21:32:59.654296 1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/wrh 323\nI0425 21:32:59.854297 1 logs_generator.go:76] 6 GET /api/v1/namespaces/default/pods/xc7 359\nI0425 21:33:00.054303 1 logs_generator.go:76] 7 POST /api/v1/namespaces/kube-system/pods/bsfh 437\nI0425 21:33:00.254292 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/ns/pods/b79n 257\nI0425 21:33:00.454502 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/vb5 273\nI0425 21:33:00.654286 1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/9fhh 242\nI0425 21:33:00.854286 1 logs_generator.go:76] 11 GET /api/v1/namespaces/kube-system/pods/xbb 337\nI0425 21:33:01.054264 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/kube-system/pods/zdv 527\nI0425 21:33:01.254288 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/default/pods/75rd 521\nI0425 21:33:01.454346 1 logs_generator.go:76] 14 POST /api/v1/namespaces/kube-system/pods/dwx 290\nI0425 21:33:01.654300 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/ns/pods/t5s 282\nI0425 21:33:01.854297 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/mcq 258\nI0425 21:33:02.054309 1 logs_generator.go:76] 17 POST /api/v1/namespaces/ns/pods/7qm 292\nI0425 21:33:02.254341 1 logs_generator.go:76] 18 GET /api/v1/namespaces/kube-system/pods/4zst 456\nI0425 21:33:02.454339 1 logs_generator.go:76] 19 GET /api/v1/namespaces/kube-system/pods/p9t 470\nI0425 21:33:02.654283 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/ns/pods/5lv9 537\nI0425 21:33:02.854319 1 logs_generator.go:76] 21 PUT 
/api/v1/namespaces/kube-system/pods/54x 584\nI0425 21:33:03.054369 1 logs_generator.go:76] 22 POST /api/v1/namespaces/kube-system/pods/rhvz 280\nI0425 21:33:03.254315 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/kube-system/pods/xw7j 353\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363 Apr 25 21:33:03.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-7904' Apr 25 21:33:09.247: INFO: stderr: "" Apr 25 21:33:09.247: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:33:09.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7904" for this suite. • [SLOW TEST:13.226 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1353 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":103,"skipped":1606,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:33:09.253: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Apr 25 21:33:14.441: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:33:14.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-9782" for this suite. • [SLOW TEST:5.320 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":104,"skipped":1624,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:33:14.574: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:33:47.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3383" for this suite. 
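The quoted `{"msg":...}` lines between specs are machine-readable progress records that tooling can aggregate; a minimal sketch of reading one (the record shown earlier for the Kubectl logs spec; the aggregation helper here is illustrative, not part of the suite):

```python
import json

# Progress record emitted after the "Kubectl logs" spec passed (copied from the log).
record = ('{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able '
          'to retrieve and filter logs [Conformance]",'
          '"total":278,"completed":103,"skipped":1606,"failed":0}')

progress = json.loads(record)
remaining = progress["total"] - progress["completed"]  # specs still to run
print(progress["failed"], remaining)  # 0 175
```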
• [SLOW TEST:32.997 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":105,"skipped":1632,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:33:47.571: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 25 21:33:48.321: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 25 21:33:50.332: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723447228, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723447228, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723447228, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723447228, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 25 21:33:53.366: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Apr 25 21:33:57.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-1666 to-be-attached-pod -i -c=container1' Apr 25 21:33:57.549: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:33:57.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1666" for this suite. STEP: Destroying namespace "webhook-1666-markers" for this suite. 
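The `rc: 1` above is the exit status of the `kubectl attach` invocation; the spec passes precisely because the denying webhook forces a non-zero exit. A hedged sketch of the same exit-code check against a stand-in command (no kubectl or cluster is assumed here, so a shell `exit 1` substitutes for the denied attach):

```python
import subprocess

# Stand-in for "kubectl attach ..." being rejected by the admission webhook:
# any command exiting non-zero exercises the same denial check.
result = subprocess.run(["sh", "-c", "exit 1"])
denied = result.returncode != 0
print(denied)  # True when the command was rejected
```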
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.081 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":106,"skipped":1663,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:33:57.652: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 25 21:33:57.703: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-2691 I0425 21:33:57.725529 6 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-2691, replica count: 1 I0425 21:33:58.775998 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0425 21:33:59.776206 6 runners.go:189] svc-latency-rc 
Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0425 21:34:00.776462 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0425 21:34:01.776668 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 25 21:34:01.906: INFO: Created: latency-svc-txgls Apr 25 21:34:01.921: INFO: Got endpoints: latency-svc-txgls [45.020675ms] Apr 25 21:34:01.955: INFO: Created: latency-svc-2lt7f Apr 25 21:34:01.992: INFO: Got endpoints: latency-svc-2lt7f [70.491443ms] Apr 25 21:34:02.003: INFO: Created: latency-svc-5nnrp Apr 25 21:34:02.038: INFO: Got endpoints: latency-svc-5nnrp [116.009067ms] Apr 25 21:34:02.061: INFO: Created: latency-svc-bjw2m Apr 25 21:34:02.076: INFO: Got endpoints: latency-svc-bjw2m [154.60629ms] Apr 25 21:34:02.136: INFO: Created: latency-svc-vvmfb Apr 25 21:34:02.140: INFO: Got endpoints: latency-svc-vvmfb [218.434606ms] Apr 25 21:34:02.175: INFO: Created: latency-svc-6k25z Apr 25 21:34:02.193: INFO: Got endpoints: latency-svc-6k25z [271.681366ms] Apr 25 21:34:02.213: INFO: Created: latency-svc-s6fdz Apr 25 21:34:02.229: INFO: Got endpoints: latency-svc-s6fdz [307.694218ms] Apr 25 21:34:02.280: INFO: Created: latency-svc-4l428 Apr 25 21:34:02.298: INFO: Got endpoints: latency-svc-4l428 [376.464547ms] Apr 25 21:34:02.331: INFO: Created: latency-svc-b6t4g Apr 25 21:34:02.344: INFO: Got endpoints: latency-svc-b6t4g [421.788233ms] Apr 25 21:34:02.367: INFO: Created: latency-svc-z45gd Apr 25 21:34:02.429: INFO: Got endpoints: latency-svc-z45gd [507.684094ms] Apr 25 21:34:02.439: INFO: Created: latency-svc-gmdpd Apr 25 21:34:02.470: INFO: Got endpoints: latency-svc-gmdpd [548.380892ms] Apr 25 21:34:02.502: INFO: Created: latency-svc-wmhbv Apr 25 21:34:02.519: INFO: Got endpoints: latency-svc-wmhbv 
[596.878653ms] Apr 25 21:34:02.585: INFO: Created: latency-svc-f2dsz Apr 25 21:34:02.606: INFO: Got endpoints: latency-svc-f2dsz [684.31389ms] Apr 25 21:34:02.649: INFO: Created: latency-svc-rlf84 Apr 25 21:34:02.663: INFO: Got endpoints: latency-svc-rlf84 [740.877693ms] Apr 25 21:34:02.729: INFO: Created: latency-svc-j5wqm Apr 25 21:34:02.733: INFO: Got endpoints: latency-svc-j5wqm [810.75563ms] Apr 25 21:34:02.776: INFO: Created: latency-svc-t6bcb Apr 25 21:34:02.789: INFO: Got endpoints: latency-svc-t6bcb [867.726902ms] Apr 25 21:34:02.823: INFO: Created: latency-svc-bh4wg Apr 25 21:34:02.873: INFO: Got endpoints: latency-svc-bh4wg [880.3175ms] Apr 25 21:34:02.912: INFO: Created: latency-svc-s8vwr Apr 25 21:34:02.929: INFO: Got endpoints: latency-svc-s8vwr [891.337712ms] Apr 25 21:34:02.955: INFO: Created: latency-svc-pkpjf Apr 25 21:34:02.970: INFO: Got endpoints: latency-svc-pkpjf [893.596826ms] Apr 25 21:34:03.030: INFO: Created: latency-svc-w6trx Apr 25 21:34:03.036: INFO: Got endpoints: latency-svc-w6trx [895.859045ms] Apr 25 21:34:03.059: INFO: Created: latency-svc-d4vwq Apr 25 21:34:03.073: INFO: Got endpoints: latency-svc-d4vwq [879.307041ms] Apr 25 21:34:03.106: INFO: Created: latency-svc-kd5wx Apr 25 21:34:03.121: INFO: Got endpoints: latency-svc-kd5wx [891.411308ms] Apr 25 21:34:03.166: INFO: Created: latency-svc-2l7sr Apr 25 21:34:03.175: INFO: Got endpoints: latency-svc-2l7sr [876.408266ms] Apr 25 21:34:03.215: INFO: Created: latency-svc-6429g Apr 25 21:34:03.244: INFO: Got endpoints: latency-svc-6429g [900.83383ms] Apr 25 21:34:03.310: INFO: Created: latency-svc-pqc8q Apr 25 21:34:03.320: INFO: Got endpoints: latency-svc-pqc8q [890.135377ms] Apr 25 21:34:03.345: INFO: Created: latency-svc-7lmqp Apr 25 21:34:03.353: INFO: Got endpoints: latency-svc-7lmqp [883.039821ms] Apr 25 21:34:03.392: INFO: Created: latency-svc-7xcj9 Apr 25 21:34:03.409: INFO: Got endpoints: latency-svc-7xcj9 [890.248118ms] Apr 25 21:34:03.501: INFO: Created: latency-svc-rw7rc 
Apr 25 21:34:03.522: INFO: Got endpoints: latency-svc-rw7rc [916.035172ms] Apr 25 21:34:03.591: INFO: Created: latency-svc-54gmb Apr 25 21:34:03.611: INFO: Got endpoints: latency-svc-54gmb [948.219225ms] Apr 25 21:34:03.640: INFO: Created: latency-svc-6lpf8 Apr 25 21:34:03.655: INFO: Got endpoints: latency-svc-6lpf8 [922.25484ms] Apr 25 21:34:03.729: INFO: Created: latency-svc-g42wj Apr 25 21:34:03.738: INFO: Got endpoints: latency-svc-g42wj [948.974313ms] Apr 25 21:34:03.758: INFO: Created: latency-svc-cwn9q Apr 25 21:34:03.769: INFO: Got endpoints: latency-svc-cwn9q [896.004818ms] Apr 25 21:34:03.795: INFO: Created: latency-svc-l84vk Apr 25 21:34:03.826: INFO: Got endpoints: latency-svc-l84vk [897.228182ms] Apr 25 21:34:03.880: INFO: Created: latency-svc-jfgfb Apr 25 21:34:03.895: INFO: Got endpoints: latency-svc-jfgfb [925.484803ms] Apr 25 21:34:03.927: INFO: Created: latency-svc-zv5jd Apr 25 21:34:03.944: INFO: Got endpoints: latency-svc-zv5jd [908.006091ms] Apr 25 21:34:03.998: INFO: Created: latency-svc-2xhhl Apr 25 21:34:04.004: INFO: Got endpoints: latency-svc-2xhhl [931.305241ms] Apr 25 21:34:04.035: INFO: Created: latency-svc-fw99c Apr 25 21:34:04.046: INFO: Got endpoints: latency-svc-fw99c [924.953108ms] Apr 25 21:34:04.097: INFO: Created: latency-svc-ccprv Apr 25 21:34:04.136: INFO: Got endpoints: latency-svc-ccprv [961.434343ms] Apr 25 21:34:04.148: INFO: Created: latency-svc-f94jz Apr 25 21:34:04.167: INFO: Got endpoints: latency-svc-f94jz [922.244899ms] Apr 25 21:34:04.192: INFO: Created: latency-svc-wgl4r Apr 25 21:34:04.203: INFO: Got endpoints: latency-svc-wgl4r [883.498691ms] Apr 25 21:34:04.227: INFO: Created: latency-svc-bqr4m Apr 25 21:34:04.268: INFO: Got endpoints: latency-svc-bqr4m [914.302767ms] Apr 25 21:34:04.300: INFO: Created: latency-svc-25wvp Apr 25 21:34:04.317: INFO: Got endpoints: latency-svc-25wvp [908.447145ms] Apr 25 21:34:04.349: INFO: Created: latency-svc-s8ssg Apr 25 21:34:04.405: INFO: Got endpoints: latency-svc-s8ssg 
[883.493827ms] Apr 25 21:34:04.448: INFO: Created: latency-svc-8cfl5 Apr 25 21:34:04.480: INFO: Got endpoints: latency-svc-8cfl5 [869.249604ms] Apr 25 21:34:04.567: INFO: Created: latency-svc-lp4xg Apr 25 21:34:04.607: INFO: Got endpoints: latency-svc-lp4xg [951.693618ms] Apr 25 21:34:04.827: INFO: Created: latency-svc-td9m7 Apr 25 21:34:04.927: INFO: Got endpoints: latency-svc-td9m7 [1.188905307s] Apr 25 21:34:04.985: INFO: Created: latency-svc-kfnt6 Apr 25 21:34:05.001: INFO: Got endpoints: latency-svc-kfnt6 [1.232440715s] Apr 25 21:34:05.083: INFO: Created: latency-svc-v5nlq Apr 25 21:34:05.099: INFO: Got endpoints: latency-svc-v5nlq [1.27261309s] Apr 25 21:34:05.159: INFO: Created: latency-svc-rt4fh Apr 25 21:34:05.346: INFO: Got endpoints: latency-svc-rt4fh [1.450877266s] Apr 25 21:34:05.350: INFO: Created: latency-svc-hhrpw Apr 25 21:34:05.380: INFO: Got endpoints: latency-svc-hhrpw [1.436287025s] Apr 25 21:34:05.433: INFO: Created: latency-svc-6f989 Apr 25 21:34:05.473: INFO: Got endpoints: latency-svc-6f989 [1.469204168s] Apr 25 21:34:05.511: INFO: Created: latency-svc-bgj52 Apr 25 21:34:05.524: INFO: Got endpoints: latency-svc-bgj52 [1.478517771s] Apr 25 21:34:05.553: INFO: Created: latency-svc-ndrp8 Apr 25 21:34:05.621: INFO: Got endpoints: latency-svc-ndrp8 [1.484936259s] Apr 25 21:34:05.645: INFO: Created: latency-svc-6jn5b Apr 25 21:34:05.663: INFO: Got endpoints: latency-svc-6jn5b [1.496016888s] Apr 25 21:34:05.703: INFO: Created: latency-svc-mgkxd Apr 25 21:34:05.717: INFO: Got endpoints: latency-svc-mgkxd [1.513914896s] Apr 25 21:34:05.825: INFO: Created: latency-svc-r6b6f Apr 25 21:34:05.837: INFO: Got endpoints: latency-svc-r6b6f [1.569529879s] Apr 25 21:34:05.861: INFO: Created: latency-svc-rjnf9 Apr 25 21:34:05.885: INFO: Got endpoints: latency-svc-rjnf9 [1.567252082s] Apr 25 21:34:05.950: INFO: Created: latency-svc-ck6qk Apr 25 21:34:05.973: INFO: Got endpoints: latency-svc-ck6qk [1.567540731s] Apr 25 21:34:05.974: INFO: Created: 
latency-svc-8s26m Apr 25 21:34:05.988: INFO: Got endpoints: latency-svc-8s26m [1.507612029s] Apr 25 21:34:06.009: INFO: Created: latency-svc-mgn67 Apr 25 21:34:06.033: INFO: Got endpoints: latency-svc-mgn67 [1.426425774s] Apr 25 21:34:06.106: INFO: Created: latency-svc-7rc2h Apr 25 21:34:06.111: INFO: Got endpoints: latency-svc-7rc2h [1.183939521s] Apr 25 21:34:06.137: INFO: Created: latency-svc-rdtfk Apr 25 21:34:06.151: INFO: Got endpoints: latency-svc-rdtfk [1.149452946s] Apr 25 21:34:06.177: INFO: Created: latency-svc-w4kmq Apr 25 21:34:06.201: INFO: Got endpoints: latency-svc-w4kmq [1.101508033s] Apr 25 21:34:06.258: INFO: Created: latency-svc-6p2nq Apr 25 21:34:06.265: INFO: Got endpoints: latency-svc-6p2nq [918.422714ms] Apr 25 21:34:06.287: INFO: Created: latency-svc-rqnjr Apr 25 21:34:06.302: INFO: Got endpoints: latency-svc-rqnjr [921.237354ms] Apr 25 21:34:06.323: INFO: Created: latency-svc-ghqh8 Apr 25 21:34:06.353: INFO: Got endpoints: latency-svc-ghqh8 [879.57758ms] Apr 25 21:34:06.400: INFO: Created: latency-svc-7pczf Apr 25 21:34:06.404: INFO: Got endpoints: latency-svc-7pczf [879.163943ms] Apr 25 21:34:06.448: INFO: Created: latency-svc-d9s4w Apr 25 21:34:06.495: INFO: Got endpoints: latency-svc-d9s4w [873.387288ms] Apr 25 21:34:06.610: INFO: Created: latency-svc-dhtmd Apr 25 21:34:06.633: INFO: Got endpoints: latency-svc-dhtmd [970.074186ms] Apr 25 21:34:06.671: INFO: Created: latency-svc-78pl4 Apr 25 21:34:06.687: INFO: Got endpoints: latency-svc-78pl4 [969.479984ms] Apr 25 21:34:06.707: INFO: Created: latency-svc-xg257 Apr 25 21:34:06.741: INFO: Got endpoints: latency-svc-xg257 [903.879731ms] Apr 25 21:34:06.753: INFO: Created: latency-svc-mm44s Apr 25 21:34:06.771: INFO: Got endpoints: latency-svc-mm44s [886.404689ms] Apr 25 21:34:06.801: INFO: Created: latency-svc-n99s5 Apr 25 21:34:06.813: INFO: Got endpoints: latency-svc-n99s5 [839.778884ms] Apr 25 21:34:06.839: INFO: Created: latency-svc-f7gg8 Apr 25 21:34:06.879: INFO: Got endpoints: 
latency-svc-f7gg8 [890.89069ms] Apr 25 21:34:06.893: INFO: Created: latency-svc-b7p8c Apr 25 21:34:06.918: INFO: Got endpoints: latency-svc-b7p8c [884.695573ms] Apr 25 21:34:06.957: INFO: Created: latency-svc-sg6j9 Apr 25 21:34:06.970: INFO: Got endpoints: latency-svc-sg6j9 [858.091361ms] Apr 25 21:34:07.028: INFO: Created: latency-svc-l7xc5 Apr 25 21:34:07.031: INFO: Got endpoints: latency-svc-l7xc5 [880.29317ms] Apr 25 21:34:07.079: INFO: Created: latency-svc-flw9k Apr 25 21:34:07.096: INFO: Got endpoints: latency-svc-flw9k [895.689491ms] Apr 25 21:34:07.115: INFO: Created: latency-svc-ctv4q Apr 25 21:34:07.184: INFO: Got endpoints: latency-svc-ctv4q [919.162911ms] Apr 25 21:34:07.187: INFO: Created: latency-svc-6nksb Apr 25 21:34:07.194: INFO: Got endpoints: latency-svc-6nksb [891.986005ms] Apr 25 21:34:07.219: INFO: Created: latency-svc-9tdq9 Apr 25 21:34:07.229: INFO: Got endpoints: latency-svc-9tdq9 [876.080802ms] Apr 25 21:34:07.252: INFO: Created: latency-svc-c2n7z Apr 25 21:34:07.259: INFO: Got endpoints: latency-svc-c2n7z [855.683038ms] Apr 25 21:34:07.283: INFO: Created: latency-svc-gb268 Apr 25 21:34:07.357: INFO: Got endpoints: latency-svc-gb268 [862.753489ms] Apr 25 21:34:07.401: INFO: Created: latency-svc-s2bgj Apr 25 21:34:07.416: INFO: Got endpoints: latency-svc-s2bgj [782.710649ms] Apr 25 21:34:07.438: INFO: Created: latency-svc-v294j Apr 25 21:34:07.446: INFO: Got endpoints: latency-svc-v294j [759.616239ms] Apr 25 21:34:07.496: INFO: Created: latency-svc-lt59b Apr 25 21:34:07.500: INFO: Got endpoints: latency-svc-lt59b [759.213264ms] Apr 25 21:34:07.529: INFO: Created: latency-svc-ksdjk Apr 25 21:34:07.549: INFO: Got endpoints: latency-svc-ksdjk [777.64032ms] Apr 25 21:34:07.572: INFO: Created: latency-svc-zfvnx Apr 25 21:34:07.645: INFO: Got endpoints: latency-svc-zfvnx [832.227222ms] Apr 25 21:34:07.671: INFO: Created: latency-svc-bcq9b Apr 25 21:34:07.681: INFO: Got endpoints: latency-svc-bcq9b [802.311706ms] Apr 25 21:34:07.711: INFO: 
Created: latency-svc-7fs8k Apr 25 21:34:07.717: INFO: Got endpoints: latency-svc-7fs8k [799.333319ms] Apr 25 21:34:07.745: INFO: Created: latency-svc-g4v2t Apr 25 21:34:07.839: INFO: Got endpoints: latency-svc-g4v2t [869.32471ms] Apr 25 21:34:07.840: INFO: Created: latency-svc-gkr8p Apr 25 21:34:07.856: INFO: Got endpoints: latency-svc-gkr8p [824.846359ms] Apr 25 21:34:07.881: INFO: Created: latency-svc-w2bgx Apr 25 21:34:07.892: INFO: Got endpoints: latency-svc-w2bgx [795.42152ms] Apr 25 21:34:07.968: INFO: Created: latency-svc-qfkhh Apr 25 21:34:07.972: INFO: Got endpoints: latency-svc-qfkhh [787.615272ms] Apr 25 21:34:08.013: INFO: Created: latency-svc-t48zm Apr 25 21:34:08.043: INFO: Got endpoints: latency-svc-t48zm [848.919201ms] Apr 25 21:34:08.068: INFO: Created: latency-svc-rbtgk Apr 25 21:34:08.118: INFO: Got endpoints: latency-svc-rbtgk [889.065878ms] Apr 25 21:34:08.188: INFO: Created: latency-svc-9s8w6 Apr 25 21:34:08.206: INFO: Got endpoints: latency-svc-9s8w6 [946.448087ms] Apr 25 21:34:08.276: INFO: Created: latency-svc-wjsjt Apr 25 21:34:08.277: INFO: Got endpoints: latency-svc-wjsjt [919.586055ms] Apr 25 21:34:08.309: INFO: Created: latency-svc-p2cpd Apr 25 21:34:08.326: INFO: Got endpoints: latency-svc-p2cpd [910.372622ms] Apr 25 21:34:08.351: INFO: Created: latency-svc-z5mlq Apr 25 21:34:08.418: INFO: Got endpoints: latency-svc-z5mlq [971.292821ms] Apr 25 21:34:08.463: INFO: Created: latency-svc-zp7lh Apr 25 21:34:08.483: INFO: Got endpoints: latency-svc-zp7lh [982.623734ms] Apr 25 21:34:08.507: INFO: Created: latency-svc-qv7wd Apr 25 21:34:08.543: INFO: Got endpoints: latency-svc-qv7wd [993.993237ms] Apr 25 21:34:08.548: INFO: Created: latency-svc-jmw9d Apr 25 21:34:08.589: INFO: Got endpoints: latency-svc-jmw9d [943.409427ms] Apr 25 21:34:08.619: INFO: Created: latency-svc-s9lm6 Apr 25 21:34:08.633: INFO: Got endpoints: latency-svc-s9lm6 [952.1538ms] Apr 25 21:34:08.681: INFO: Created: latency-svc-t27xq Apr 25 21:34:08.711: INFO: Got endpoints: 
latency-svc-t27xq [993.262306ms] Apr 25 21:34:08.711: INFO: Created: latency-svc-jbcrs Apr 25 21:34:08.725: INFO: Got endpoints: latency-svc-jbcrs [886.123454ms] Apr 25 21:34:08.747: INFO: Created: latency-svc-4wqkl Apr 25 21:34:08.760: INFO: Got endpoints: latency-svc-4wqkl [903.772692ms] Apr 25 21:34:08.825: INFO: Created: latency-svc-5skg6 Apr 25 21:34:08.853: INFO: Got endpoints: latency-svc-5skg6 [961.344508ms] Apr 25 21:34:08.855: INFO: Created: latency-svc-tm86q Apr 25 21:34:08.868: INFO: Got endpoints: latency-svc-tm86q [896.578664ms] Apr 25 21:34:08.889: INFO: Created: latency-svc-qdd8q Apr 25 21:34:08.905: INFO: Got endpoints: latency-svc-qdd8q [861.953841ms] Apr 25 21:34:08.978: INFO: Created: latency-svc-jnqmj Apr 25 21:34:08.983: INFO: Got endpoints: latency-svc-jnqmj [864.583335ms] Apr 25 21:34:09.021: INFO: Created: latency-svc-zrwhk Apr 25 21:34:09.037: INFO: Got endpoints: latency-svc-zrwhk [830.951725ms] Apr 25 21:34:09.063: INFO: Created: latency-svc-wt8fw Apr 25 21:34:09.136: INFO: Got endpoints: latency-svc-wt8fw [858.783011ms] Apr 25 21:34:09.152: INFO: Created: latency-svc-dg5z8 Apr 25 21:34:09.157: INFO: Got endpoints: latency-svc-dg5z8 [831.146008ms] Apr 25 21:34:09.178: INFO: Created: latency-svc-v4rq6 Apr 25 21:34:09.188: INFO: Got endpoints: latency-svc-v4rq6 [770.010426ms] Apr 25 21:34:09.210: INFO: Created: latency-svc-f2285 Apr 25 21:34:09.218: INFO: Got endpoints: latency-svc-f2285 [735.092032ms] Apr 25 21:34:09.280: INFO: Created: latency-svc-7lxbb Apr 25 21:34:09.283: INFO: Got endpoints: latency-svc-7lxbb [740.484914ms] Apr 25 21:34:09.327: INFO: Created: latency-svc-tdwfc Apr 25 21:34:09.365: INFO: Got endpoints: latency-svc-tdwfc [775.942579ms] Apr 25 21:34:09.424: INFO: Created: latency-svc-4mnt8 Apr 25 21:34:09.432: INFO: Got endpoints: latency-svc-4mnt8 [798.648635ms] Apr 25 21:34:09.449: INFO: Created: latency-svc-xhhqh Apr 25 21:34:09.459: INFO: Got endpoints: latency-svc-xhhqh [748.619617ms] Apr 25 21:34:09.489: INFO: 
Created: latency-svc-ckwl8 Apr 25 21:34:09.514: INFO: Got endpoints: latency-svc-ckwl8 [788.31716ms] Apr 25 21:34:09.580: INFO: Created: latency-svc-5ttpb Apr 25 21:34:09.598: INFO: Got endpoints: latency-svc-5ttpb [838.119195ms] Apr 25 21:34:09.630: INFO: Created: latency-svc-fbbzc Apr 25 21:34:09.652: INFO: Got endpoints: latency-svc-fbbzc [799.122058ms] Apr 25 21:34:09.699: INFO: Created: latency-svc-s8sf2 Apr 25 21:34:09.706: INFO: Got endpoints: latency-svc-s8sf2 [837.438479ms] Apr 25 21:34:09.759: INFO: Created: latency-svc-xc957 Apr 25 21:34:09.772: INFO: Got endpoints: latency-svc-xc957 [867.364156ms] Apr 25 21:34:09.838: INFO: Created: latency-svc-vvqgp Apr 25 21:34:09.844: INFO: Got endpoints: latency-svc-vvqgp [861.302995ms] Apr 25 21:34:09.868: INFO: Created: latency-svc-8prf2 Apr 25 21:34:09.881: INFO: Got endpoints: latency-svc-8prf2 [843.67042ms] Apr 25 21:34:09.903: INFO: Created: latency-svc-2vhqq Apr 25 21:34:09.917: INFO: Got endpoints: latency-svc-2vhqq [781.27593ms] Apr 25 21:34:09.980: INFO: Created: latency-svc-lj6kp Apr 25 21:34:09.995: INFO: Got endpoints: latency-svc-lj6kp [837.704643ms] Apr 25 21:34:10.017: INFO: Created: latency-svc-nmnj4 Apr 25 21:34:10.043: INFO: Got endpoints: latency-svc-nmnj4 [854.886735ms] Apr 25 21:34:10.124: INFO: Created: latency-svc-6l8fp Apr 25 21:34:10.155: INFO: Got endpoints: latency-svc-6l8fp [936.50251ms] Apr 25 21:34:10.155: INFO: Created: latency-svc-vpjzs Apr 25 21:34:10.176: INFO: Got endpoints: latency-svc-vpjzs [892.731451ms] Apr 25 21:34:10.197: INFO: Created: latency-svc-wgbqm Apr 25 21:34:10.212: INFO: Got endpoints: latency-svc-wgbqm [847.136368ms] Apr 25 21:34:10.268: INFO: Created: latency-svc-k4svx Apr 25 21:34:10.278: INFO: Got endpoints: latency-svc-k4svx [846.196917ms] Apr 25 21:34:10.301: INFO: Created: latency-svc-ft5xx Apr 25 21:34:10.309: INFO: Got endpoints: latency-svc-ft5xx [849.385441ms] Apr 25 21:34:10.330: INFO: Created: latency-svc-n96dj Apr 25 21:34:10.339: INFO: Got endpoints: 
latency-svc-n96dj [825.360262ms] Apr 25 21:34:10.411: INFO: Created: latency-svc-d8qxd Apr 25 21:34:10.414: INFO: Got endpoints: latency-svc-d8qxd [816.316628ms] Apr 25 21:34:10.448: INFO: Created: latency-svc-868sr Apr 25 21:34:10.466: INFO: Got endpoints: latency-svc-868sr [813.611314ms] Apr 25 21:34:10.486: INFO: Created: latency-svc-4fqld Apr 25 21:34:10.549: INFO: Got endpoints: latency-svc-4fqld [843.114061ms] Apr 25 21:34:10.552: INFO: Created: latency-svc-nqkcw Apr 25 21:34:10.568: INFO: Got endpoints: latency-svc-nqkcw [795.852972ms] Apr 25 21:34:10.589: INFO: Created: latency-svc-b6xct Apr 25 21:34:10.598: INFO: Got endpoints: latency-svc-b6xct [753.727527ms] Apr 25 21:34:10.623: INFO: Created: latency-svc-d6lv6 Apr 25 21:34:10.641: INFO: Got endpoints: latency-svc-d6lv6 [760.697313ms] Apr 25 21:34:10.675: INFO: Created: latency-svc-lplwz Apr 25 21:34:10.679: INFO: Got endpoints: latency-svc-lplwz [761.249715ms] Apr 25 21:34:10.715: INFO: Created: latency-svc-l4cpf Apr 25 21:34:10.733: INFO: Got endpoints: latency-svc-l4cpf [737.655302ms] Apr 25 21:34:10.769: INFO: Created: latency-svc-28g78 Apr 25 21:34:10.806: INFO: Got endpoints: latency-svc-28g78 [763.649982ms] Apr 25 21:34:10.814: INFO: Created: latency-svc-9fmpt Apr 25 21:34:10.838: INFO: Got endpoints: latency-svc-9fmpt [683.614916ms] Apr 25 21:34:10.868: INFO: Created: latency-svc-mcbmt Apr 25 21:34:10.877: INFO: Got endpoints: latency-svc-mcbmt [700.995801ms] Apr 25 21:34:10.906: INFO: Created: latency-svc-c7wm5 Apr 25 21:34:11.009: INFO: Got endpoints: latency-svc-c7wm5 [796.799418ms] Apr 25 21:34:11.013: INFO: Created: latency-svc-6n485 Apr 25 21:34:11.036: INFO: Got endpoints: latency-svc-6n485 [757.691549ms] Apr 25 21:34:11.073: INFO: Created: latency-svc-n6gvc Apr 25 21:34:11.088: INFO: Got endpoints: latency-svc-n6gvc [779.541344ms] Apr 25 21:34:11.143: INFO: Created: latency-svc-wvrpc Apr 25 21:34:11.164: INFO: Got endpoints: latency-svc-wvrpc [825.265448ms] Apr 25 21:34:11.166: INFO: 
Created: latency-svc-nfxrn Apr 25 21:34:11.179: INFO: Got endpoints: latency-svc-nfxrn [764.063929ms] Apr 25 21:34:11.200: INFO: Created: latency-svc-lxss7 Apr 25 21:34:11.215: INFO: Got endpoints: latency-svc-lxss7 [748.615446ms] Apr 25 21:34:11.236: INFO: Created: latency-svc-hn298 Apr 25 21:34:11.274: INFO: Got endpoints: latency-svc-hn298 [724.605137ms] Apr 25 21:34:11.306: INFO: Created: latency-svc-fxm94 Apr 25 21:34:11.324: INFO: Got endpoints: latency-svc-fxm94 [755.675091ms] Apr 25 21:34:11.349: INFO: Created: latency-svc-g9nmn Apr 25 21:34:11.360: INFO: Got endpoints: latency-svc-g9nmn [761.608809ms] Apr 25 21:34:11.412: INFO: Created: latency-svc-vxlfz Apr 25 21:34:11.426: INFO: Got endpoints: latency-svc-vxlfz [784.791458ms] Apr 25 21:34:11.453: INFO: Created: latency-svc-4mdn4 Apr 25 21:34:11.468: INFO: Got endpoints: latency-svc-4mdn4 [789.556956ms] Apr 25 21:34:11.486: INFO: Created: latency-svc-996q9 Apr 25 21:34:11.505: INFO: Got endpoints: latency-svc-996q9 [772.049993ms] Apr 25 21:34:11.556: INFO: Created: latency-svc-sw2wf Apr 25 21:34:11.577: INFO: Got endpoints: latency-svc-sw2wf [770.096781ms] Apr 25 21:34:11.632: INFO: Created: latency-svc-cc7wq Apr 25 21:34:11.649: INFO: Got endpoints: latency-svc-cc7wq [810.461381ms] Apr 25 21:34:11.705: INFO: Created: latency-svc-jswvj Apr 25 21:34:11.732: INFO: Got endpoints: latency-svc-jswvj [855.122163ms] Apr 25 21:34:11.732: INFO: Created: latency-svc-n9xtm Apr 25 21:34:11.758: INFO: Got endpoints: latency-svc-n9xtm [748.752313ms] Apr 25 21:34:11.844: INFO: Created: latency-svc-wxrd6 Apr 25 21:34:11.846: INFO: Got endpoints: latency-svc-wxrd6 [810.164885ms] Apr 25 21:34:11.872: INFO: Created: latency-svc-pmb6z Apr 25 21:34:11.891: INFO: Got endpoints: latency-svc-pmb6z [802.335243ms] Apr 25 21:34:11.952: INFO: Created: latency-svc-j65hx Apr 25 21:34:12.000: INFO: Got endpoints: latency-svc-j65hx [835.830737ms] Apr 25 21:34:12.002: INFO: Created: latency-svc-dqvvg Apr 25 21:34:12.010: INFO: Got 
endpoints: latency-svc-dqvvg [831.419465ms] Apr 25 21:34:12.046: INFO: Created: latency-svc-hkwng Apr 25 21:34:12.064: INFO: Got endpoints: latency-svc-hkwng [849.548728ms] Apr 25 21:34:12.094: INFO: Created: latency-svc-grhsh Apr 25 21:34:12.136: INFO: Got endpoints: latency-svc-grhsh [862.265792ms] Apr 25 21:34:12.146: INFO: Created: latency-svc-8lbz6 Apr 25 21:34:12.162: INFO: Got endpoints: latency-svc-8lbz6 [838.207894ms] Apr 25 21:34:12.188: INFO: Created: latency-svc-6xqlv Apr 25 21:34:12.197: INFO: Got endpoints: latency-svc-6xqlv [837.610186ms] Apr 25 21:34:12.226: INFO: Created: latency-svc-9tthr Apr 25 21:34:12.280: INFO: Got endpoints: latency-svc-9tthr [853.717474ms] Apr 25 21:34:12.298: INFO: Created: latency-svc-kzgx7 Apr 25 21:34:12.312: INFO: Got endpoints: latency-svc-kzgx7 [843.437089ms] Apr 25 21:34:12.332: INFO: Created: latency-svc-6m56q Apr 25 21:34:12.348: INFO: Got endpoints: latency-svc-6m56q [842.925214ms] Apr 25 21:34:12.442: INFO: Created: latency-svc-s6jbv Apr 25 21:34:12.446: INFO: Got endpoints: latency-svc-s6jbv [868.717044ms] Apr 25 21:34:12.491: INFO: Created: latency-svc-vl7ps Apr 25 21:34:12.505: INFO: Got endpoints: latency-svc-vl7ps [855.717337ms] Apr 25 21:34:12.524: INFO: Created: latency-svc-vf5ph Apr 25 21:34:12.540: INFO: Got endpoints: latency-svc-vf5ph [807.749741ms] Apr 25 21:34:12.590: INFO: Created: latency-svc-dj799 Apr 25 21:34:12.622: INFO: Got endpoints: latency-svc-dj799 [864.294102ms] Apr 25 21:34:12.652: INFO: Created: latency-svc-nbfjd Apr 25 21:34:12.667: INFO: Got endpoints: latency-svc-nbfjd [820.924244ms] Apr 25 21:34:12.711: INFO: Created: latency-svc-bqf7t Apr 25 21:34:12.713: INFO: Got endpoints: latency-svc-bqf7t [822.644794ms] Apr 25 21:34:12.740: INFO: Created: latency-svc-kmmgm Apr 25 21:34:12.758: INFO: Got endpoints: latency-svc-kmmgm [757.463245ms] Apr 25 21:34:12.806: INFO: Created: latency-svc-kh49r Apr 25 21:34:12.866: INFO: Got endpoints: latency-svc-kh49r [856.145617ms] Apr 25 21:34:12.869: 
INFO: Created: latency-svc-qws86 Apr 25 21:34:12.878: INFO: Got endpoints: latency-svc-qws86 [813.614309ms] Apr 25 21:34:12.904: INFO: Created: latency-svc-8xwrl Apr 25 21:34:12.921: INFO: Got endpoints: latency-svc-8xwrl [784.962824ms] Apr 25 21:34:12.942: INFO: Created: latency-svc-wjhqn Apr 25 21:34:12.950: INFO: Got endpoints: latency-svc-wjhqn [788.166979ms] Apr 25 21:34:13.000: INFO: Created: latency-svc-k9dlp Apr 25 21:34:13.011: INFO: Got endpoints: latency-svc-k9dlp [813.483797ms] Apr 25 21:34:13.042: INFO: Created: latency-svc-svqxj Apr 25 21:34:13.066: INFO: Got endpoints: latency-svc-svqxj [785.755311ms] Apr 25 21:34:13.091: INFO: Created: latency-svc-zp44v Apr 25 21:34:13.130: INFO: Got endpoints: latency-svc-zp44v [818.736999ms] Apr 25 21:34:13.160: INFO: Created: latency-svc-f4nlv Apr 25 21:34:13.173: INFO: Got endpoints: latency-svc-f4nlv [825.451443ms] Apr 25 21:34:13.196: INFO: Created: latency-svc-4xzs4 Apr 25 21:34:13.210: INFO: Got endpoints: latency-svc-4xzs4 [764.612156ms] Apr 25 21:34:13.281: INFO: Created: latency-svc-fvrkv Apr 25 21:34:13.299: INFO: Got endpoints: latency-svc-fvrkv [793.932812ms] Apr 25 21:34:13.318: INFO: Created: latency-svc-mr2sg Apr 25 21:34:13.334: INFO: Got endpoints: latency-svc-mr2sg [794.216705ms] Apr 25 21:34:13.366: INFO: Created: latency-svc-r8zdw Apr 25 21:34:13.411: INFO: Got endpoints: latency-svc-r8zdw [789.194166ms] Apr 25 21:34:13.424: INFO: Created: latency-svc-nt4d9 Apr 25 21:34:13.448: INFO: Got endpoints: latency-svc-nt4d9 [780.830433ms] Apr 25 21:34:13.478: INFO: Created: latency-svc-lxgmv Apr 25 21:34:13.510: INFO: Got endpoints: latency-svc-lxgmv [796.291987ms] Apr 25 21:34:13.567: INFO: Created: latency-svc-2xf2v Apr 25 21:34:13.582: INFO: Got endpoints: latency-svc-2xf2v [823.830386ms] Apr 25 21:34:13.610: INFO: Created: latency-svc-5bvfr Apr 25 21:34:13.624: INFO: Got endpoints: latency-svc-5bvfr [758.203433ms] Apr 25 21:34:13.735: INFO: Created: latency-svc-brj5q Apr 25 21:34:13.739: INFO: Got 
endpoints: latency-svc-brj5q [860.967889ms] Apr 25 21:34:13.775: INFO: Created: latency-svc-s7nks Apr 25 21:34:13.785: INFO: Got endpoints: latency-svc-s7nks [863.557557ms] Apr 25 21:34:13.820: INFO: Created: latency-svc-sml54 Apr 25 21:34:13.866: INFO: Got endpoints: latency-svc-sml54 [916.244558ms] Apr 25 21:34:13.892: INFO: Created: latency-svc-nh7h9 Apr 25 21:34:13.908: INFO: Got endpoints: latency-svc-nh7h9 [897.51958ms] Apr 25 21:34:13.908: INFO: Latencies: [70.491443ms 116.009067ms 154.60629ms 218.434606ms 271.681366ms 307.694218ms 376.464547ms 421.788233ms 507.684094ms 548.380892ms 596.878653ms 683.614916ms 684.31389ms 700.995801ms 724.605137ms 735.092032ms 737.655302ms 740.484914ms 740.877693ms 748.615446ms 748.619617ms 748.752313ms 753.727527ms 755.675091ms 757.463245ms 757.691549ms 758.203433ms 759.213264ms 759.616239ms 760.697313ms 761.249715ms 761.608809ms 763.649982ms 764.063929ms 764.612156ms 770.010426ms 770.096781ms 772.049993ms 775.942579ms 777.64032ms 779.541344ms 780.830433ms 781.27593ms 782.710649ms 784.791458ms 784.962824ms 785.755311ms 787.615272ms 788.166979ms 788.31716ms 789.194166ms 789.556956ms 793.932812ms 794.216705ms 795.42152ms 795.852972ms 796.291987ms 796.799418ms 798.648635ms 799.122058ms 799.333319ms 802.311706ms 802.335243ms 807.749741ms 810.164885ms 810.461381ms 810.75563ms 813.483797ms 813.611314ms 813.614309ms 816.316628ms 818.736999ms 820.924244ms 822.644794ms 823.830386ms 824.846359ms 825.265448ms 825.360262ms 825.451443ms 830.951725ms 831.146008ms 831.419465ms 832.227222ms 835.830737ms 837.438479ms 837.610186ms 837.704643ms 838.119195ms 838.207894ms 839.778884ms 842.925214ms 843.114061ms 843.437089ms 843.67042ms 846.196917ms 847.136368ms 848.919201ms 849.385441ms 849.548728ms 853.717474ms 854.886735ms 855.122163ms 855.683038ms 855.717337ms 856.145617ms 858.091361ms 858.783011ms 860.967889ms 861.302995ms 861.953841ms 862.265792ms 862.753489ms 863.557557ms 864.294102ms 864.583335ms 867.364156ms 867.726902ms 868.717044ms 
869.249604ms 869.32471ms 873.387288ms 876.080802ms 876.408266ms 879.163943ms 879.307041ms 879.57758ms 880.29317ms 880.3175ms 883.039821ms 883.493827ms 883.498691ms 884.695573ms 886.123454ms 886.404689ms 889.065878ms 890.135377ms 890.248118ms 890.89069ms 891.337712ms 891.411308ms 891.986005ms 892.731451ms 893.596826ms 895.689491ms 895.859045ms 896.004818ms 896.578664ms 897.228182ms 897.51958ms 900.83383ms 903.772692ms 903.879731ms 908.006091ms 908.447145ms 910.372622ms 914.302767ms 916.035172ms 916.244558ms 918.422714ms 919.162911ms 919.586055ms 921.237354ms 922.244899ms 922.25484ms 924.953108ms 925.484803ms 931.305241ms 936.50251ms 943.409427ms 946.448087ms 948.219225ms 948.974313ms 951.693618ms 952.1538ms 961.344508ms 961.434343ms 969.479984ms 970.074186ms 971.292821ms 982.623734ms 993.262306ms 993.993237ms 1.101508033s 1.149452946s 1.183939521s 1.188905307s 1.232440715s 1.27261309s 1.426425774s 1.436287025s 1.450877266s 1.469204168s 1.478517771s 1.484936259s 1.496016888s 1.507612029s 1.513914896s 1.567252082s 1.567540731s 1.569529879s] Apr 25 21:34:13.909: INFO: 50 %ile: 854.886735ms Apr 25 21:34:13.909: INFO: 90 %ile: 993.262306ms Apr 25 21:34:13.909: INFO: 99 %ile: 1.567540731s Apr 25 21:34:13.909: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:34:13.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-2691" for this suite. 
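The percentile values reported above (50 %ile 854.886735ms, 90 %ile 993.262306ms, 99 %ile 1.567540731s over 200 samples) are consistent with indexing the sorted sample slice at position ⌊p/100 · n⌋ — e.g. the 50 %ile of 200 samples is the element at index 100. A minimal sketch of that computation (an illustration of how the numbers line up, not the framework's exact Go code):

```python
def percentile(sorted_samples, p):
    """Return the p-th percentile by direct indexing into a sorted list.

    Mirrors how the reported values line up with the 200-sample array in
    the log above; illustrative only, not the e2e framework's exact code.
    """
    if not sorted_samples:
        raise ValueError("no samples")
    # Clamp so p=100 returns the last (largest) element.
    idx = min(int(p / 100 * len(sorted_samples)), len(sorted_samples) - 1)
    return sorted_samples[idx]

# Toy data: 200 synthetic latencies, already sorted.
samples = sorted(range(1, 201))
print(percentile(samples, 50))   # -> 101 (element at index 100)
print(percentile(samples, 90))   # -> 181 (element at index 180)
print(percentile(samples, 99))   # -> 199 (element at index 198)
```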
• [SLOW TEST:16.296 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":107,"skipped":1680,"failed":0}
SSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application
  should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 21:34:13.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should create and stop a working application [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating all guestbook components
Apr 25 21:34:14.045: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Apr 25 21:34:14.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3308'
Apr 25 21:34:14.310: INFO: stderr: ""
Apr 25 21:34:14.310: INFO: stdout: "service/agnhost-slave created\n"
Apr 25 21:34:14.311: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Apr 25 21:34:14.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3308'
Apr 25 21:34:14.582: INFO: stderr: ""
Apr 25 21:34:14.582: INFO: stdout: "service/agnhost-master created\n"
Apr 25 21:34:14.582: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Apr 25 21:34:14.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3308'
Apr 25 21:34:14.862: INFO: stderr: ""
Apr 25 21:34:14.862: INFO: stdout: "service/frontend created\n"
Apr 25 21:34:14.862: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Apr 25 21:34:14.862: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3308'
Apr 25 21:34:15.108: INFO: stderr: ""
Apr 25 21:34:15.108: INFO: stdout: "deployment.apps/frontend created\n"
Apr 25 21:34:15.108: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Apr 25 21:34:15.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3308'
Apr 25 21:34:15.411: INFO: stderr: ""
Apr 25 21:34:15.411: INFO: stdout: "deployment.apps/agnhost-master created\n"
Apr 25 21:34:15.411: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Apr 25 21:34:15.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3308'
Apr 25 21:34:15.643: INFO: stderr: ""
Apr 25 21:34:15.643: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Apr 25 21:34:15.643: INFO: Waiting for all frontend pods to be Running.
Apr 25 21:34:25.693: INFO: Waiting for frontend to serve content.
Apr 25 21:34:25.715: INFO: Trying to add a new entry to the guestbook.
Apr 25 21:34:25.724: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Apr 25 21:34:25.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3308'
Apr 25 21:34:25.981: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n" Apr 25 21:34:25.981: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources Apr 25 21:34:25.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3308' Apr 25 21:34:26.202: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 25 21:34:26.202: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Apr 25 21:34:26.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3308' Apr 25 21:34:26.437: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 25 21:34:26.437: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 25 21:34:26.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3308' Apr 25 21:34:26.558: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 25 21:34:26.558: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 25 21:34:26.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3308' Apr 25 21:34:26.739: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 25 21:34:26.739: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources Apr 25 21:34:26.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3308' Apr 25 21:34:26.835: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 25 21:34:26.835: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:34:26.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3308" for this suite. • [SLOW TEST:12.949 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:380 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":278,"completed":108,"skipped":1687,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes 
client Apr 25 21:34:26.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 25 21:34:28.651: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 25 21:34:30.766: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723447268, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723447268, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723447268, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723447268, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 25 21:34:33.879: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of 
the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:34:34.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8694" for this suite. STEP: Destroying namespace "webhook-8694-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.113 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":109,"skipped":1690,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:34:35.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to 
be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-c56073be-69a9-4d72-8f76-15f1103ebf04 STEP: Creating a pod to test consume configMaps Apr 25 21:34:35.511: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-60d8ee6b-895e-4848-9eb1-0df8b9e939c1" in namespace "projected-7768" to be "success or failure" Apr 25 21:34:35.540: INFO: Pod "pod-projected-configmaps-60d8ee6b-895e-4848-9eb1-0df8b9e939c1": Phase="Pending", Reason="", readiness=false. Elapsed: 28.864788ms Apr 25 21:34:37.568: INFO: Pod "pod-projected-configmaps-60d8ee6b-895e-4848-9eb1-0df8b9e939c1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057494546s Apr 25 21:34:39.615: INFO: Pod "pod-projected-configmaps-60d8ee6b-895e-4848-9eb1-0df8b9e939c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.104665063s STEP: Saw pod success Apr 25 21:34:39.615: INFO: Pod "pod-projected-configmaps-60d8ee6b-895e-4848-9eb1-0df8b9e939c1" satisfied condition "success or failure" Apr 25 21:34:39.632: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-60d8ee6b-895e-4848-9eb1-0df8b9e939c1 container projected-configmap-volume-test: STEP: delete the pod Apr 25 21:34:39.712: INFO: Waiting for pod pod-projected-configmaps-60d8ee6b-895e-4848-9eb1-0df8b9e939c1 to disappear Apr 25 21:34:39.753: INFO: Pod pod-projected-configmaps-60d8ee6b-895e-4848-9eb1-0df8b9e939c1 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:34:39.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7768" for this suite. 
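The projected-configMap test above mounts a configMap through a `projected` volume and checks that the file is created with the requested `defaultMode`. A minimal pod sketch of that setup (names and image are hypothetical stand-ins, not the exact spec the framework generated):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # hypothetical name
spec:
  containers:
  - name: projected-configmap-volume-test
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed test image
    args: ["--file_mode=/etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      defaultMode: 0400   # YAML octal; decimal 256 is equivalent
      sources:
      - configMap:
          name: projected-configmap-test-volume   # hypothetical configMap name
  restartPolicy: Never
```

With `restartPolicy: Never`, the pod runs to `Succeeded` once the container verifies the mode and exits, which is the "success or failure" condition the log waits on.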
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":110,"skipped":1715,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:34:39.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 25 21:34:40.018: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-e3773d8d-ffa1-495e-9dbe-bb17e88780b9" in namespace "security-context-test-4646" to be "success or failure" Apr 25 21:34:40.054: INFO: Pod "alpine-nnp-false-e3773d8d-ffa1-495e-9dbe-bb17e88780b9": Phase="Pending", Reason="", readiness=false. Elapsed: 35.410407ms Apr 25 21:34:42.164: INFO: Pod "alpine-nnp-false-e3773d8d-ffa1-495e-9dbe-bb17e88780b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.145415879s Apr 25 21:34:44.208: INFO: Pod "alpine-nnp-false-e3773d8d-ffa1-495e-9dbe-bb17e88780b9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.190126744s Apr 25 21:34:44.208: INFO: Pod "alpine-nnp-false-e3773d8d-ffa1-495e-9dbe-bb17e88780b9" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:34:44.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4646" for this suite. •{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":111,"skipped":1741,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:34:44.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 25 21:34:44.453: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3ed83935-4b16-43ea-a6b5-11d215f41d76" in namespace "downward-api-8824" to be "success or failure" Apr 25 21:34:44.505: INFO: Pod "downwardapi-volume-3ed83935-4b16-43ea-a6b5-11d215f41d76": Phase="Pending", 
Reason="", readiness=false. Elapsed: 52.405883ms Apr 25 21:34:46.508: INFO: Pod "downwardapi-volume-3ed83935-4b16-43ea-a6b5-11d215f41d76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055248444s Apr 25 21:34:48.512: INFO: Pod "downwardapi-volume-3ed83935-4b16-43ea-a6b5-11d215f41d76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.059243087s STEP: Saw pod success Apr 25 21:34:48.512: INFO: Pod "downwardapi-volume-3ed83935-4b16-43ea-a6b5-11d215f41d76" satisfied condition "success or failure" Apr 25 21:34:48.515: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-3ed83935-4b16-43ea-a6b5-11d215f41d76 container client-container: STEP: delete the pod Apr 25 21:34:48.553: INFO: Waiting for pod downwardapi-volume-3ed83935-4b16-43ea-a6b5-11d215f41d76 to disappear Apr 25 21:34:48.573: INFO: Pod downwardapi-volume-3ed83935-4b16-43ea-a6b5-11d215f41d76 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:34:48.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8824" for this suite. 
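The downward API test above exercises a per-item `mode` rather than a volume-wide `defaultMode`. A hedged sketch of such a pod (names and image are hypothetical, not the framework's exact generated spec):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical name
spec:
  containers:
  - name: client-container
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed test image
    args: ["--file_mode=/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: "podname"
        fieldRef:
          fieldPath: metadata.name
        mode: 0400   # per-item mode: the behavior this conformance test verifies
  restartPolicy: Never
```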
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":112,"skipped":1777,"failed":0} SSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:34:48.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-45169247-bd75-4d8c-bf94-d36cee4239b1 in namespace container-probe-2642 Apr 25 21:34:52.667: INFO: Started pod liveness-45169247-bd75-4d8c-bf94-d36cee4239b1 in namespace container-probe-2642 STEP: checking the pod's current state and verifying that restartCount is present Apr 25 21:34:52.670: INFO: Initial restart count of pod liveness-45169247-bd75-4d8c-bf94-d36cee4239b1 is 0 Apr 25 21:35:06.727: INFO: Restart count of pod container-probe-2642/liveness-45169247-bd75-4d8c-bf94-d36cee4239b1 is now 1 (14.056991763s elapsed) Apr 25 21:35:26.782: INFO: Restart count of pod container-probe-2642/liveness-45169247-bd75-4d8c-bf94-d36cee4239b1 is now 2 (34.112199124s elapsed) Apr 25 21:35:46.867: INFO: Restart count of pod container-probe-2642/liveness-45169247-bd75-4d8c-bf94-d36cee4239b1 is now 3 (54.196548743s elapsed) Apr 25 21:36:06.911: 
INFO: Restart count of pod container-probe-2642/liveness-45169247-bd75-4d8c-bf94-d36cee4239b1 is now 4 (1m14.240442584s elapsed) Apr 25 21:37:15.056: INFO: Restart count of pod container-probe-2642/liveness-45169247-bd75-4d8c-bf94-d36cee4239b1 is now 5 (2m22.386154696s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:37:15.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2642" for this suite. • [SLOW TEST:146.562 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":113,"skipped":1781,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:37:15.144: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run default 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1489 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 25 21:37:15.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-6969' Apr 25 21:37:15.522: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 25 21:37:15.523: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created [AfterEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1495 Apr 25 21:37:17.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-6969' Apr 25 21:37:17.829: INFO: stderr: "" Apr 25 21:37:17.829: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:37:17.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6969" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":114,"skipped":1866,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:37:17.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting the proxy server Apr 25 21:37:18.014: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:37:18.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9758" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":115,"skipped":1868,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:37:18.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Apr 25 21:37:18.827: INFO: Pod name wrapped-volume-race-0b8f3300-14c1-41b9-b4c0-26abbf8e03d2: Found 0 pods out of 5 Apr 25 21:37:23.835: INFO: Pod name wrapped-volume-race-0b8f3300-14c1-41b9-b4c0-26abbf8e03d2: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-0b8f3300-14c1-41b9-b4c0-26abbf8e03d2 in namespace emptydir-wrapper-3610, will wait for the garbage collector to delete the pods Apr 25 21:37:35.928: INFO: Deleting ReplicationController wrapped-volume-race-0b8f3300-14c1-41b9-b4c0-26abbf8e03d2 took: 6.256395ms Apr 25 21:37:36.330: INFO: Terminating ReplicationController wrapped-volume-race-0b8f3300-14c1-41b9-b4c0-26abbf8e03d2 pods took: 402.28938ms STEP: Creating RC which spawns configmap-volume pods Apr 25 21:37:50.364: INFO: Pod name wrapped-volume-race-f9cf9652-fc35-4c29-bb39-bb8e47bf72ea: Found 0 pods out of 5 Apr 25 21:37:55.371: INFO: 
Pod name wrapped-volume-race-f9cf9652-fc35-4c29-bb39-bb8e47bf72ea: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-f9cf9652-fc35-4c29-bb39-bb8e47bf72ea in namespace emptydir-wrapper-3610, will wait for the garbage collector to delete the pods Apr 25 21:38:09.456: INFO: Deleting ReplicationController wrapped-volume-race-f9cf9652-fc35-4c29-bb39-bb8e47bf72ea took: 7.965881ms Apr 25 21:38:09.856: INFO: Terminating ReplicationController wrapped-volume-race-f9cf9652-fc35-4c29-bb39-bb8e47bf72ea pods took: 400.222801ms STEP: Creating RC which spawns configmap-volume pods Apr 25 21:38:20.590: INFO: Pod name wrapped-volume-race-b1dad841-99e7-41d3-85da-abc7a6239903: Found 0 pods out of 5 Apr 25 21:38:25.597: INFO: Pod name wrapped-volume-race-b1dad841-99e7-41d3-85da-abc7a6239903: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-b1dad841-99e7-41d3-85da-abc7a6239903 in namespace emptydir-wrapper-3610, will wait for the garbage collector to delete the pods Apr 25 21:38:37.682: INFO: Deleting ReplicationController wrapped-volume-race-b1dad841-99e7-41d3-85da-abc7a6239903 took: 7.168065ms Apr 25 21:38:37.982: INFO: Terminating ReplicationController wrapped-volume-race-b1dad841-99e7-41d3-85da-abc7a6239903 pods took: 300.24496ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:38:50.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-3610" for this suite. 
• [SLOW TEST:92.080 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":116,"skipped":1887,"failed":0} SSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:38:50.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-9584 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-9584 STEP: creating replication controller externalsvc in namespace services-9584 I0425 21:38:50.430515 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-9584, replica count: 2 I0425 21:38:53.480872 6 
runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0425 21:38:56.481055 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Apr 25 21:38:56.565: INFO: Creating new exec pod Apr 25 21:39:00.602: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9584 execpodtlntg -- /bin/sh -x -c nslookup clusterip-service' Apr 25 21:39:03.710: INFO: stderr: "I0425 21:39:03.598423 1402 log.go:172] (0xc0000f42c0) (0xc0008d20a0) Create stream\nI0425 21:39:03.598468 1402 log.go:172] (0xc0000f42c0) (0xc0008d20a0) Stream added, broadcasting: 1\nI0425 21:39:03.601766 1402 log.go:172] (0xc0000f42c0) Reply frame received for 1\nI0425 21:39:03.601832 1402 log.go:172] (0xc0000f42c0) (0xc00064de00) Create stream\nI0425 21:39:03.601864 1402 log.go:172] (0xc0000f42c0) (0xc00064de00) Stream added, broadcasting: 3\nI0425 21:39:03.602926 1402 log.go:172] (0xc0000f42c0) Reply frame received for 3\nI0425 21:39:03.602967 1402 log.go:172] (0xc0000f42c0) (0xc0008d2140) Create stream\nI0425 21:39:03.602990 1402 log.go:172] (0xc0000f42c0) (0xc0008d2140) Stream added, broadcasting: 5\nI0425 21:39:03.604341 1402 log.go:172] (0xc0000f42c0) Reply frame received for 5\nI0425 21:39:03.691373 1402 log.go:172] (0xc0000f42c0) Data frame received for 5\nI0425 21:39:03.691422 1402 log.go:172] (0xc0008d2140) (5) Data frame handling\nI0425 21:39:03.691474 1402 log.go:172] (0xc0008d2140) (5) Data frame sent\n+ nslookup clusterip-service\nI0425 21:39:03.699317 1402 log.go:172] (0xc0000f42c0) Data frame received for 3\nI0425 21:39:03.699351 1402 log.go:172] (0xc00064de00) (3) Data frame handling\nI0425 21:39:03.699369 1402 log.go:172] (0xc00064de00) (3) Data frame sent\nI0425 21:39:03.700268 1402 log.go:172] (0xc0000f42c0) Data 
frame received for 3\nI0425 21:39:03.700299 1402 log.go:172] (0xc00064de00) (3) Data frame handling\nI0425 21:39:03.700331 1402 log.go:172] (0xc00064de00) (3) Data frame sent\nI0425 21:39:03.700764 1402 log.go:172] (0xc0000f42c0) Data frame received for 5\nI0425 21:39:03.700811 1402 log.go:172] (0xc0008d2140) (5) Data frame handling\nI0425 21:39:03.700841 1402 log.go:172] (0xc0000f42c0) Data frame received for 3\nI0425 21:39:03.700864 1402 log.go:172] (0xc00064de00) (3) Data frame handling\nI0425 21:39:03.703114 1402 log.go:172] (0xc0000f42c0) Data frame received for 1\nI0425 21:39:03.703147 1402 log.go:172] (0xc0008d20a0) (1) Data frame handling\nI0425 21:39:03.703167 1402 log.go:172] (0xc0008d20a0) (1) Data frame sent\nI0425 21:39:03.703181 1402 log.go:172] (0xc0000f42c0) (0xc0008d20a0) Stream removed, broadcasting: 1\nI0425 21:39:03.703263 1402 log.go:172] (0xc0000f42c0) Go away received\nI0425 21:39:03.703675 1402 log.go:172] (0xc0000f42c0) (0xc0008d20a0) Stream removed, broadcasting: 1\nI0425 21:39:03.703708 1402 log.go:172] (0xc0000f42c0) (0xc00064de00) Stream removed, broadcasting: 3\nI0425 21:39:03.703719 1402 log.go:172] (0xc0000f42c0) (0xc0008d2140) Stream removed, broadcasting: 5\n" Apr 25 21:39:03.710: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-9584.svc.cluster.local\tcanonical name = externalsvc.services-9584.svc.cluster.local.\nName:\texternalsvc.services-9584.svc.cluster.local\nAddress: 10.102.75.185\n\n" STEP: deleting ReplicationController externalsvc in namespace services-9584, will wait for the garbage collector to delete the pods Apr 25 21:39:03.770: INFO: Deleting ReplicationController externalsvc took: 6.537554ms Apr 25 21:39:04.170: INFO: Terminating ReplicationController externalsvc pods took: 400.264594ms Apr 25 21:39:19.600: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:39:19.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9584" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:29.466 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":117,"skipped":1893,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:39:19.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:39:50.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-3257" for this suite. STEP: Destroying namespace "nsdeletetest-2910" for this suite. Apr 25 21:39:50.946: INFO: Namespace nsdeletetest-2910 was already deleted STEP: Destroying namespace "nsdeletetest-1321" for this suite. • [SLOW TEST:31.277 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":118,"skipped":1901,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:39:50.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe 
that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:40:51.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4867" for this suite. • [SLOW TEST:60.077 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":119,"skipped":1953,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:40:51.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name 
projected-secret-test-map-c7fc7916-f9ef-44c2-a51c-40c2980055ac STEP: Creating a pod to test consume secrets Apr 25 21:40:51.123: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-48a8a766-5dc8-4e33-89eb-b0ccb46559ea" in namespace "projected-7825" to be "success or failure" Apr 25 21:40:51.150: INFO: Pod "pod-projected-secrets-48a8a766-5dc8-4e33-89eb-b0ccb46559ea": Phase="Pending", Reason="", readiness=false. Elapsed: 26.985817ms Apr 25 21:40:53.154: INFO: Pod "pod-projected-secrets-48a8a766-5dc8-4e33-89eb-b0ccb46559ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031541038s Apr 25 21:40:55.158: INFO: Pod "pod-projected-secrets-48a8a766-5dc8-4e33-89eb-b0ccb46559ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035148785s STEP: Saw pod success Apr 25 21:40:55.158: INFO: Pod "pod-projected-secrets-48a8a766-5dc8-4e33-89eb-b0ccb46559ea" satisfied condition "success or failure" Apr 25 21:40:55.160: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-48a8a766-5dc8-4e33-89eb-b0ccb46559ea container projected-secret-volume-test: STEP: delete the pod Apr 25 21:40:55.223: INFO: Waiting for pod pod-projected-secrets-48a8a766-5dc8-4e33-89eb-b0ccb46559ea to disappear Apr 25 21:40:55.249: INFO: Pod pod-projected-secrets-48a8a766-5dc8-4e33-89eb-b0ccb46559ea no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:40:55.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7825" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":120,"skipped":1976,"failed":0} SS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:40:55.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 25 21:40:59.855: INFO: Successfully updated pod "pod-update-activedeadlineseconds-2912b73b-754b-4f6e-a463-ddac8ff9816c" Apr 25 21:40:59.855: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-2912b73b-754b-4f6e-a463-ddac8ff9816c" in namespace "pods-306" to be "terminated due to deadline exceeded" Apr 25 21:40:59.880: INFO: Pod "pod-update-activedeadlineseconds-2912b73b-754b-4f6e-a463-ddac8ff9816c": Phase="Running", Reason="", readiness=true. Elapsed: 24.860767ms Apr 25 21:41:01.884: INFO: Pod "pod-update-activedeadlineseconds-2912b73b-754b-4f6e-a463-ddac8ff9816c": Phase="Failed", Reason="DeadlineExceeded", readiness=false. 
Elapsed: 2.029037342s Apr 25 21:41:01.884: INFO: Pod "pod-update-activedeadlineseconds-2912b73b-754b-4f6e-a463-ddac8ff9816c" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:41:01.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-306" for this suite. • [SLOW TEST:6.635 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":121,"skipped":1978,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:41:01.894: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-810e6c11-b06a-4533-8dc9-ddf88bbdf6b0 STEP: Creating a pod to test consume configMaps Apr 25 
21:41:01.993: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f9f48aa1-6176-4dff-a6e5-13dcabd8b7c7" in namespace "projected-3052" to be "success or failure" Apr 25 21:41:02.019: INFO: Pod "pod-projected-configmaps-f9f48aa1-6176-4dff-a6e5-13dcabd8b7c7": Phase="Pending", Reason="", readiness=false. Elapsed: 26.029988ms Apr 25 21:41:04.024: INFO: Pod "pod-projected-configmaps-f9f48aa1-6176-4dff-a6e5-13dcabd8b7c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03070047s Apr 25 21:41:06.028: INFO: Pod "pod-projected-configmaps-f9f48aa1-6176-4dff-a6e5-13dcabd8b7c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034677599s STEP: Saw pod success Apr 25 21:41:06.028: INFO: Pod "pod-projected-configmaps-f9f48aa1-6176-4dff-a6e5-13dcabd8b7c7" satisfied condition "success or failure" Apr 25 21:41:06.031: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-f9f48aa1-6176-4dff-a6e5-13dcabd8b7c7 container projected-configmap-volume-test: STEP: delete the pod Apr 25 21:41:06.064: INFO: Waiting for pod pod-projected-configmaps-f9f48aa1-6176-4dff-a6e5-13dcabd8b7c7 to disappear Apr 25 21:41:06.069: INFO: Pod pod-projected-configmaps-f9f48aa1-6176-4dff-a6e5-13dcabd8b7c7 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:41:06.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3052" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":122,"skipped":2006,"failed":0} ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:41:06.076: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Apr 25 21:41:10.192: INFO: &Pod{ObjectMeta:{send-events-b698ed5a-68ba-4e5f-b71c-ca238f809f52 events-505 /api/v1/namespaces/events-505/pods/send-events-b698ed5a-68ba-4e5f-b71c-ca238f809f52 c4ee5074-18e2-4fac-b2ab-28ce8a0bfb14 11023187 0 2020-04-25 21:41:06 +0000 UTC map[name:foo time:146812630] map[] [] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wfrnp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wfrnp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wfrnp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:n
il,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.163,StartTime:2020-04-25 21:41:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-25 21:41:08 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://17a06b0a2f0618c8b06bd1614573d465e8b53e73fe0b02b7852a9d321bb79bef,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.163,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod Apr 25 21:41:12.212: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Apr 25 21:41:14.216: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:41:14.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-505" for this suite. 
• [SLOW TEST:8.160 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":123,"skipped":2006,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:41:14.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-projected-q97t STEP: Creating a pod to test atomic-volume-subpath Apr 25 21:41:14.411: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-q97t" in namespace "subpath-7640" to be "success or failure" Apr 25 21:41:14.414: INFO: Pod "pod-subpath-test-projected-q97t": Phase="Pending", Reason="", readiness=false. Elapsed: 3.422654ms Apr 25 21:41:16.418: INFO: Pod "pod-subpath-test-projected-q97t": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007561276s Apr 25 21:41:18.423: INFO: Pod "pod-subpath-test-projected-q97t": Phase="Running", Reason="", readiness=true. Elapsed: 4.012157761s Apr 25 21:41:20.427: INFO: Pod "pod-subpath-test-projected-q97t": Phase="Running", Reason="", readiness=true. Elapsed: 6.016505994s Apr 25 21:41:22.432: INFO: Pod "pod-subpath-test-projected-q97t": Phase="Running", Reason="", readiness=true. Elapsed: 8.02075957s Apr 25 21:41:24.436: INFO: Pod "pod-subpath-test-projected-q97t": Phase="Running", Reason="", readiness=true. Elapsed: 10.025038469s Apr 25 21:41:26.440: INFO: Pod "pod-subpath-test-projected-q97t": Phase="Running", Reason="", readiness=true. Elapsed: 12.029438859s Apr 25 21:41:28.445: INFO: Pod "pod-subpath-test-projected-q97t": Phase="Running", Reason="", readiness=true. Elapsed: 14.034082047s Apr 25 21:41:30.449: INFO: Pod "pod-subpath-test-projected-q97t": Phase="Running", Reason="", readiness=true. Elapsed: 16.038152122s Apr 25 21:41:32.453: INFO: Pod "pod-subpath-test-projected-q97t": Phase="Running", Reason="", readiness=true. Elapsed: 18.042480543s Apr 25 21:41:34.458: INFO: Pod "pod-subpath-test-projected-q97t": Phase="Running", Reason="", readiness=true. Elapsed: 20.046780345s Apr 25 21:41:36.461: INFO: Pod "pod-subpath-test-projected-q97t": Phase="Running", Reason="", readiness=true. Elapsed: 22.050428789s Apr 25 21:41:38.465: INFO: Pod "pod-subpath-test-projected-q97t": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.054346301s STEP: Saw pod success Apr 25 21:41:38.465: INFO: Pod "pod-subpath-test-projected-q97t" satisfied condition "success or failure" Apr 25 21:41:38.468: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-projected-q97t container test-container-subpath-projected-q97t: STEP: delete the pod Apr 25 21:41:38.527: INFO: Waiting for pod pod-subpath-test-projected-q97t to disappear Apr 25 21:41:38.537: INFO: Pod pod-subpath-test-projected-q97t no longer exists STEP: Deleting pod pod-subpath-test-projected-q97t Apr 25 21:41:38.537: INFO: Deleting pod "pod-subpath-test-projected-q97t" in namespace "subpath-7640" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:41:38.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7640" for this suite. • [SLOW TEST:24.311 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":124,"skipped":2020,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:41:38.547: INFO: 
>>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 25 21:41:38.592: INFO: Creating deployment "webserver-deployment" Apr 25 21:41:38.603: INFO: Waiting for observed generation 1 Apr 25 21:41:40.612: INFO: Waiting for all required pods to come up Apr 25 21:41:40.615: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Apr 25 21:41:50.623: INFO: Waiting for deployment "webserver-deployment" to complete Apr 25 21:41:50.629: INFO: Updating deployment "webserver-deployment" with a non-existent image Apr 25 21:41:50.635: INFO: Updating deployment webserver-deployment Apr 25 21:41:50.635: INFO: Waiting for observed generation 2 Apr 25 21:41:52.665: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Apr 25 21:41:52.668: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Apr 25 21:41:52.670: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Apr 25 21:41:52.679: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Apr 25 21:41:52.679: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Apr 25 21:41:52.682: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Apr 25 21:41:52.686: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Apr 25 21:41:52.686: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Apr 25 
21:41:52.691: INFO: Updating deployment webserver-deployment Apr 25 21:41:52.691: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Apr 25 21:41:52.735: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Apr 25 21:41:52.741: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Apr 25 21:41:52.800: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-9010 /apis/apps/v1/namespaces/deployment-9010/deployments/webserver-deployment a9fa39b2-5aaf-4722-a6a4-10120b2e4075 11023564 3 2020-04-25 21:41:38 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003c0aa28 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-04-25 21:41:51 +0000 UTC,LastTransitionTime:2020-04-25 21:41:38 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-04-25 21:41:52 +0000 UTC,LastTransitionTime:2020-04-25 21:41:52 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Apr 25 21:41:52.917: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-9010 /apis/apps/v1/namespaces/deployment-9010/replicasets/webserver-deployment-c7997dcc8 5f135afd-dcc4-49e6-815c-9d4e512b7591 11023595 3 2020-04-25 21:41:50 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment a9fa39b2-5aaf-4722-a6a4-10120b2e4075 0xc003c0aef7 0xc003c0aef8}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003c0af68 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 25 21:41:52.917: INFO: All old ReplicaSets of Deployment "webserver-deployment": Apr 25 21:41:52.917: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-9010 /apis/apps/v1/namespaces/deployment-9010/replicasets/webserver-deployment-595b5b9587 df34f69a-a528-4741-a929-d67b0383ccdc 11023603 3 2020-04-25 21:41:38 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment a9fa39b2-5aaf-4722-a6a4-10120b2e4075 0xc003c0ae37 0xc003c0ae38}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003c0ae98 ClusterFirst map[] 
false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Apr 25 21:41:52.983: INFO: Pod "webserver-deployment-595b5b9587-22qh7" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-22qh7 webserver-deployment-595b5b9587- deployment-9010 /api/v1/namespaces/deployment-9010/pods/webserver-deployment-595b5b9587-22qh7 01ba5110-143b-4949-a462-4f17cada966a 11023598 0 2020-04-25 21:41:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 df34f69a-a528-4741-a929-d67b0383ccdc 0xc003c0b407 0xc003c0b408}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lvjl2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lvjl2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lvjl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMess
agePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:52 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 25 21:41:52.983: INFO: Pod "webserver-deployment-595b5b9587-278ct" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-278ct webserver-deployment-595b5b9587- deployment-9010 /api/v1/namespaces/deployment-9010/pods/webserver-deployment-595b5b9587-278ct 142d934b-d1c8-40be-8ffa-41c0bebe3207 11023584 0 2020-04-25 21:41:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 df34f69a-a528-4741-a929-d67b0383ccdc 0xc003c0b527 0xc003c0b528}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lvjl2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lvjl2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lvjl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,Run
AsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 25 21:41:52.983: INFO: Pod "webserver-deployment-595b5b9587-2xw7b" is not available: 
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-2xw7b webserver-deployment-595b5b9587- deployment-9010 /api/v1/namespaces/deployment-9010/pods/webserver-deployment-595b5b9587-2xw7b 823c1293-7a4e-42b7-8f13-f7af917ff889 11023610 0 2020-04-25 21:41:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 df34f69a-a528-4741-a929-d67b0383ccdc 0xc003c0b647 0xc003c0b648}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lvjl2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lvjl2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lvjl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDead
lineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-25 21:41:52 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 25 21:41:52.984: INFO: Pod "webserver-deployment-595b5b9587-2z8t6" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-2z8t6 webserver-deployment-595b5b9587- deployment-9010 /api/v1/namespaces/deployment-9010/pods/webserver-deployment-595b5b9587-2z8t6 2909cd30-0cf9-437c-b6f3-cd4d793c989e 11023472 0 2020-04-25 21:41:38 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 df34f69a-a528-4741-a929-d67b0383ccdc 0xc003c0b7a7 0xc003c0b7a8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lvjl2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lvjl2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lvjl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.167,StartTime:2020-04-25 21:41:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-25 21:41:47 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://33619f6e71a2b7bdc27cd7afc6736d9f0f48b7c7c784483c4eec859fc6c86df3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.167,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 25 21:41:52.984: INFO: Pod "webserver-deployment-595b5b9587-bq65p" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-bq65p webserver-deployment-595b5b9587- deployment-9010 /api/v1/namespaces/deployment-9010/pods/webserver-deployment-595b5b9587-bq65p dcf12c7d-2778-42a8-a4d7-0aba9c553b12 11023465 0 2020-04-25 21:41:38 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 df34f69a-a528-4741-a929-d67b0383ccdc 0xc003c0b927 0xc003c0b928}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lvjl2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lvjl2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lvjl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.5,StartTime:2020-04-25 21:41:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-25 21:41:47 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://cd52144d48b1c8860d88eaa456720d04707492711e721b7b01f9d7c70d213423,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 25 21:41:52.984: INFO: Pod "webserver-deployment-595b5b9587-d6rqf" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-d6rqf webserver-deployment-595b5b9587- deployment-9010 /api/v1/namespaces/deployment-9010/pods/webserver-deployment-595b5b9587-d6rqf 25d7ba60-f901-455b-b34a-bb521c61022e 11023597 0 2020-04-25 21:41:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 df34f69a-a528-4741-a929-d67b0383ccdc 0xc003c0bac7 0xc003c0bac8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lvjl2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lvjl2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lvjl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 25 21:41:52.984: INFO: Pod "webserver-deployment-595b5b9587-ddh7g" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-ddh7g webserver-deployment-595b5b9587- deployment-9010 /api/v1/namespaces/deployment-9010/pods/webserver-deployment-595b5b9587-ddh7g b4b575d2-0f14-44f9-a222-0da74e8d0082 11023591 0 2020-04-25 21:41:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 df34f69a-a528-4741-a929-d67b0383ccdc 0xc003c0bbe7 0xc003c0bbe8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lvjl2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lvjl2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lvjl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 25 21:41:52.985: INFO: Pod "webserver-deployment-595b5b9587-dnqw5" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-dnqw5 webserver-deployment-595b5b9587- deployment-9010 /api/v1/namespaces/deployment-9010/pods/webserver-deployment-595b5b9587-dnqw5 07979701-93c5-4f7e-8cfc-9759aa827160 11023576 0 2020-04-25 21:41:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 df34f69a-a528-4741-a929-d67b0383ccdc 0xc003c0bd07 0xc003c0bd08}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lvjl2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lvjl2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lvjl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 25 21:41:52.985: INFO: Pod "webserver-deployment-595b5b9587-fwsq6" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-fwsq6 webserver-deployment-595b5b9587- deployment-9010 /api/v1/namespaces/deployment-9010/pods/webserver-deployment-595b5b9587-fwsq6 93914999-7509-45eb-a965-77a747126c45 11023602 0 2020-04-25 21:41:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 df34f69a-a528-4741-a929-d67b0383ccdc 0xc003c0be27 0xc003c0be28}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lvjl2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lvjl2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lvjl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 25 21:41:52.985: INFO: Pod "webserver-deployment-595b5b9587-fxvz7" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-fxvz7 webserver-deployment-595b5b9587- deployment-9010 /api/v1/namespaces/deployment-9010/pods/webserver-deployment-595b5b9587-fxvz7 80c10e66-72c7-405d-a651-e816473ba279 11023407 0 2020-04-25 21:41:38 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 df34f69a-a528-4741-a929-d67b0383ccdc 0xc003c0bf67 0xc003c0bf68}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lvjl2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lvjl2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lvjl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.2,StartTime:2020-04-25 21:41:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-25 21:41:43 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://261228860f2a35a09bbe7a319b6f443b3f594084e0c5142b3809f0dae78c474d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 25 21:41:52.986: INFO: Pod "webserver-deployment-595b5b9587-gh827" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-gh827 webserver-deployment-595b5b9587- deployment-9010 /api/v1/namespaces/deployment-9010/pods/webserver-deployment-595b5b9587-gh827 762ffacc-b804-49f8-af55-1c5fbc2e08e8 11023478 0 2020-04-25 21:41:38 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 df34f69a-a528-4741-a929-d67b0383ccdc 0xc002af0077 0xc002af0078}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lvjl2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lvjl2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lvjl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.168,StartTime:2020-04-25 21:41:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-25 21:41:47 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://bc30688fd6cee55c9719bce895f4a1c2ec1f5dccca2508d7b28b56a0d5f73eb8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.168,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 25 21:41:52.986: INFO: Pod "webserver-deployment-595b5b9587-gmv5v" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-gmv5v webserver-deployment-595b5b9587- deployment-9010 /api/v1/namespaces/deployment-9010/pods/webserver-deployment-595b5b9587-gmv5v 573cd901-d93f-4d9b-9809-d4b0d10a368c 11023421 0 2020-04-25 21:41:38 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 df34f69a-a528-4741-a929-d67b0383ccdc 0xc002af01f7 0xc002af01f8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lvjl2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lvjl2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lvjl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.165,StartTime:2020-04-25 21:41:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-25 21:41:44 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://5b82e7a4c37aaa61682b1f2b103c73f33d49e462c59c0b4cb4b1624c432623e4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.165,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 25 21:41:52.986: INFO: Pod "webserver-deployment-595b5b9587-jtkx4" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-jtkx4 webserver-deployment-595b5b9587- deployment-9010 /api/v1/namespaces/deployment-9010/pods/webserver-deployment-595b5b9587-jtkx4 816f4a2a-fe7e-409b-b714-b1b0457f04ab 11023593 0 2020-04-25 21:41:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 df34f69a-a528-4741-a929-d67b0383ccdc 0xc002af0377 0xc002af0378}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lvjl2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lvjl2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lvjl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 25 21:41:52.986: INFO: Pod "webserver-deployment-595b5b9587-l4vjs" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-l4vjs webserver-deployment-595b5b9587- deployment-9010 /api/v1/namespaces/deployment-9010/pods/webserver-deployment-595b5b9587-l4vjs 5b0a458a-333c-42a8-9d21-1c7222d8016c 11023463 0 2020-04-25 21:41:38 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 df34f69a-a528-4741-a929-d67b0383ccdc 0xc002af0497 0xc002af0498}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lvjl2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lvjl2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lvjl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.4,StartTime:2020-04-25 21:41:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-25 21:41:47 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e281f30c3db4781562b440182d76a2957a571c99164bcce57d121a04596ed4b0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 25 21:41:52.986: INFO: Pod "webserver-deployment-595b5b9587-l6wpm" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-l6wpm webserver-deployment-595b5b9587- deployment-9010 /api/v1/namespaces/deployment-9010/pods/webserver-deployment-595b5b9587-l6wpm 386d3cdb-8452-449e-ad6f-abb5da3aacab 11023600 0 2020-04-25 21:41:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 df34f69a-a528-4741-a929-d67b0383ccdc 0xc002af0617 0xc002af0618}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lvjl2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lvjl2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lvjl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 25 21:41:52.987: INFO: Pod "webserver-deployment-595b5b9587-qdffq" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-qdffq webserver-deployment-595b5b9587- deployment-9010 /api/v1/namespaces/deployment-9010/pods/webserver-deployment-595b5b9587-qdffq eed84a3e-f57b-446c-ad31-1b53f19c9dbf 11023397 0 2020-04-25 21:41:38 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 df34f69a-a528-4741-a929-d67b0383ccdc 0xc002af0737 0xc002af0738}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lvjl2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lvjl2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lvjl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.164,StartTime:2020-04-25 21:41:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-25 21:41:41 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://1267439e4ec130f56994af7c762df670fa75a8dbd09b9cccb835ce2edd58fa13,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.164,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 25 21:41:52.987: INFO: Pod "webserver-deployment-595b5b9587-skk2x" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-skk2x webserver-deployment-595b5b9587- deployment-9010 /api/v1/namespaces/deployment-9010/pods/webserver-deployment-595b5b9587-skk2x 93995647-9ee3-44ff-85a3-b59c8db98990 11023601 0 2020-04-25 21:41:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 df34f69a-a528-4741-a929-d67b0383ccdc 0xc002af08b7 0xc002af08b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lvjl2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lvjl2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lvjl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 25 21:41:52.987: INFO: Pod "webserver-deployment-595b5b9587-v2nqx" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-v2nqx webserver-deployment-595b5b9587- deployment-9010 /api/v1/namespaces/deployment-9010/pods/webserver-deployment-595b5b9587-v2nqx 5223ac9d-f961-42aa-a9d8-848d2d9163eb 11023594 0 2020-04-25 21:41:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 df34f69a-a528-4741-a929-d67b0383ccdc 0xc002af09d7 0xc002af09d8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lvjl2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lvjl2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lvjl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 25 21:41:52.987: INFO: Pod "webserver-deployment-595b5b9587-v2vw5" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-v2vw5 webserver-deployment-595b5b9587- deployment-9010 /api/v1/namespaces/deployment-9010/pods/webserver-deployment-595b5b9587-v2vw5 52eae29a-30c4-4897-b611-0f4c0f74b12f 11023460 0 2020-04-25 21:41:38 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 df34f69a-a528-4741-a929-d67b0383ccdc 0xc002af0af7 0xc002af0af8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lvjl2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lvjl2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lvjl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.6,StartTime:2020-04-25 21:41:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-25 21:41:47 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://aba0e8ab3df7041002c61cc4becf3a89d719c14999cf7fdbab849e76c15b2973,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.6,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 25 21:41:52.987: INFO: Pod "webserver-deployment-595b5b9587-wrllz" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-wrllz webserver-deployment-595b5b9587- deployment-9010 /api/v1/namespaces/deployment-9010/pods/webserver-deployment-595b5b9587-wrllz 20855350-b71f-40cc-a83c-e5af6b27d179 11023605 0 2020-04-25 21:41:52 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 df34f69a-a528-4741-a929-d67b0383ccdc 0xc002af0c77 0xc002af0c78}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lvjl2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lvjl2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lvjl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subd
omain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-04-25 21:41:52 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 25 21:41:52.988: INFO: Pod "webserver-deployment-c7997dcc8-6gct7" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-6gct7 webserver-deployment-c7997dcc8- deployment-9010 /api/v1/namespaces/deployment-9010/pods/webserver-deployment-c7997dcc8-6gct7 572feee6-9b31-4827-84b8-4f844c533cd5 11023580 0 2020-04-25 21:41:52 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5f135afd-dcc4-49e6-815c-9d4e512b7591 0xc002af0dd7 0xc002af0dd8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lvjl2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lvjl2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lvjl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 25 21:41:52.988: INFO: Pod "webserver-deployment-c7997dcc8-br2xb" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-br2xb webserver-deployment-c7997dcc8- deployment-9010 /api/v1/namespaces/deployment-9010/pods/webserver-deployment-c7997dcc8-br2xb 99cf1851-d0c0-48b0-8c6a-82791d18d380 11023514 0 2020-04-25 21:41:50 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5f135afd-dcc4-49e6-815c-9d4e512b7591 0xc002af0f07 0xc002af0f08}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lvjl2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lvjl2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lvjl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-25 21:41:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 25 21:41:52.988: INFO: Pod "webserver-deployment-c7997dcc8-cp4n4" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-cp4n4 webserver-deployment-c7997dcc8- deployment-9010 /api/v1/namespaces/deployment-9010/pods/webserver-deployment-c7997dcc8-cp4n4 38181e72-dcfa-47ab-944c-adaf267587e5 11023524 0 2020-04-25 21:41:50 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5f135afd-dcc4-49e6-815c-9d4e512b7591 0xc002af1087 0xc002af1088}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lvjl2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lvjl2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lvjl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-25 21:41:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 25 21:41:52.988: INFO: Pod "webserver-deployment-c7997dcc8-dv98x" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-dv98x webserver-deployment-c7997dcc8- deployment-9010 /api/v1/namespaces/deployment-9010/pods/webserver-deployment-c7997dcc8-dv98x 41a3f638-947e-487a-8e8c-3933279233ac 11023583 0 2020-04-25 21:41:52 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5f135afd-dcc4-49e6-815c-9d4e512b7591 0xc002af1207 0xc002af1208}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lvjl2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lvjl2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lvjl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 25 21:41:52.988: INFO: Pod "webserver-deployment-c7997dcc8-j8lk4" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-j8lk4 webserver-deployment-c7997dcc8- deployment-9010 /api/v1/namespaces/deployment-9010/pods/webserver-deployment-c7997dcc8-j8lk4 5fe4f189-366d-4a72-bbbd-f926917835de 11023578 0 2020-04-25 21:41:52 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5f135afd-dcc4-49e6-815c-9d4e512b7591 0xc002af1337 0xc002af1338}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lvjl2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lvjl2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lvjl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 25 21:41:52.988: INFO: Pod "webserver-deployment-c7997dcc8-klx46" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-klx46 webserver-deployment-c7997dcc8- deployment-9010 /api/v1/namespaces/deployment-9010/pods/webserver-deployment-c7997dcc8-klx46 314915fc-dfcf-4a1e-8aa8-307543b8f075 11023539 0 2020-04-25 21:41:50 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5f135afd-dcc4-49e6-815c-9d4e512b7591 0xc002af1467 0xc002af1468}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lvjl2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lvjl2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lvjl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-04-25 21:41:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 25 21:41:52.988: INFO: Pod "webserver-deployment-c7997dcc8-lwpr5" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-lwpr5 webserver-deployment-c7997dcc8- deployment-9010 /api/v1/namespaces/deployment-9010/pods/webserver-deployment-c7997dcc8-lwpr5 e068703e-0ab2-4769-b747-ec5bfd0624c7 11023582 0 2020-04-25 21:41:52 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5f135afd-dcc4-49e6-815c-9d4e512b7591 0xc002af15e7 0xc002af15e8}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lvjl2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lvjl2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lvjl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 25 21:41:52.989: INFO: Pod "webserver-deployment-c7997dcc8-mbz78" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-mbz78 webserver-deployment-c7997dcc8- deployment-9010 /api/v1/namespaces/deployment-9010/pods/webserver-deployment-c7997dcc8-mbz78 b9507078-1154-43c7-93f4-cf639ec88fd4 11023577 0 2020-04-25 21:41:52 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5f135afd-dcc4-49e6-815c-9d4e512b7591 0xc002af1717 0xc002af1718}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lvjl2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lvjl2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lvjl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 25 21:41:52.989: INFO: Pod "webserver-deployment-c7997dcc8-qk7pn" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-qk7pn webserver-deployment-c7997dcc8- deployment-9010 /api/v1/namespaces/deployment-9010/pods/webserver-deployment-c7997dcc8-qk7pn 649d446d-cede-4a34-abcd-39b7709db8e5 11023561 0 2020-04-25 21:41:52 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5f135afd-dcc4-49e6-815c-9d4e512b7591 0xc002af1847 0xc002af1848}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lvjl2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lvjl2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lvjl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 25 21:41:52.989: INFO: Pod "webserver-deployment-c7997dcc8-s87ct" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-s87ct webserver-deployment-c7997dcc8- deployment-9010 /api/v1/namespaces/deployment-9010/pods/webserver-deployment-c7997dcc8-s87ct 55ab05a4-2ff4-4950-86de-8f746da2215f 11023541 0 2020-04-25 21:41:51 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5f135afd-dcc4-49e6-815c-9d4e512b7591 0xc002af1987 0xc002af1988}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lvjl2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lvjl2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lvjl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-25 21:41:51 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 25 21:41:52.989: INFO: Pod "webserver-deployment-c7997dcc8-v7h6l" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-v7h6l webserver-deployment-c7997dcc8- deployment-9010 /api/v1/namespaces/deployment-9010/pods/webserver-deployment-c7997dcc8-v7h6l a6063658-9803-4a3e-b7ba-beb4385bc1fa 11023517 0 2020-04-25 21:41:50 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5f135afd-dcc4-49e6-815c-9d4e512b7591 0xc002af1b07 0xc002af1b08}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lvjl2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lvjl2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lvjl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sche
dulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-04-25 21:41:50 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 25 21:41:52.989: INFO: Pod "webserver-deployment-c7997dcc8-x92sc" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-x92sc webserver-deployment-c7997dcc8- deployment-9010 /api/v1/namespaces/deployment-9010/pods/webserver-deployment-c7997dcc8-x92sc af96dc32-8a18-41d5-b0bb-8f57d8cd7d74 11023592 0 2020-04-25 21:41:52 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5f135afd-dcc4-49e6-815c-9d4e512b7591 0xc002af1c87 0xc002af1c88}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lvjl2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lvjl2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lvjl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 25 21:41:52.990: INFO: Pod "webserver-deployment-c7997dcc8-zlwfl" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-zlwfl webserver-deployment-c7997dcc8- deployment-9010 /api/v1/namespaces/deployment-9010/pods/webserver-deployment-c7997dcc8-zlwfl ce3bbcac-02e9-4ac0-a4e2-b03b91a785da 11023599 0 2020-04-25 21:41:52 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 5f135afd-dcc4-49e6-815c-9d4e512b7591 0xc002af1db7 0xc002af1db8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lvjl2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lvjl2,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lvjl2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,Sched
ulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:41:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:41:52.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9010" for this suite. 
• [SLOW TEST:14.719 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":125,"skipped":2043,"failed":0} SS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:41:53.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition Apr 25 21:41:53.502: INFO: Waiting up to 5m0s for pod "var-expansion-e6b98021-a276-4ad7-a7ea-bf268f911db0" in namespace "var-expansion-8340" to be "success or failure" Apr 25 21:41:53.523: INFO: Pod "var-expansion-e6b98021-a276-4ad7-a7ea-bf268f911db0": Phase="Pending", Reason="", readiness=false. Elapsed: 20.717048ms Apr 25 21:41:55.527: INFO: Pod "var-expansion-e6b98021-a276-4ad7-a7ea-bf268f911db0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02462969s Apr 25 21:41:57.610: INFO: Pod "var-expansion-e6b98021-a276-4ad7-a7ea-bf268f911db0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.107257337s Apr 25 21:42:00.173: INFO: Pod "var-expansion-e6b98021-a276-4ad7-a7ea-bf268f911db0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.670783834s Apr 25 21:42:02.247: INFO: Pod "var-expansion-e6b98021-a276-4ad7-a7ea-bf268f911db0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.744911344s Apr 25 21:42:04.807: INFO: Pod "var-expansion-e6b98021-a276-4ad7-a7ea-bf268f911db0": Phase="Pending", Reason="", readiness=false. Elapsed: 11.304458283s Apr 25 21:42:07.168: INFO: Pod "var-expansion-e6b98021-a276-4ad7-a7ea-bf268f911db0": Phase="Pending", Reason="", readiness=false. Elapsed: 13.665472317s Apr 25 21:42:09.225: INFO: Pod "var-expansion-e6b98021-a276-4ad7-a7ea-bf268f911db0": Phase="Pending", Reason="", readiness=false. Elapsed: 15.722903993s Apr 25 21:42:11.265: INFO: Pod "var-expansion-e6b98021-a276-4ad7-a7ea-bf268f911db0": Phase="Running", Reason="", readiness=true. Elapsed: 17.762822316s Apr 25 21:42:13.436: INFO: Pod "var-expansion-e6b98021-a276-4ad7-a7ea-bf268f911db0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 19.933076595s STEP: Saw pod success Apr 25 21:42:13.436: INFO: Pod "var-expansion-e6b98021-a276-4ad7-a7ea-bf268f911db0" satisfied condition "success or failure" Apr 25 21:42:13.462: INFO: Trying to get logs from node jerma-worker pod var-expansion-e6b98021-a276-4ad7-a7ea-bf268f911db0 container dapi-container: STEP: delete the pod Apr 25 21:42:13.822: INFO: Waiting for pod var-expansion-e6b98021-a276-4ad7-a7ea-bf268f911db0 to disappear Apr 25 21:42:13.908: INFO: Pod var-expansion-e6b98021-a276-4ad7-a7ea-bf268f911db0 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:42:13.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8340" for this suite. 
• [SLOW TEST:20.994 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":126,"skipped":2045,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:42:14.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Apr 25 21:42:14.722: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8863 /api/v1/namespaces/watch-8863/configmaps/e2e-watch-test-configmap-a c3f09f0a-e8ef-4345-b230-5cb1c4fb8652 11023901 0 2020-04-25 21:42:14 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 25 
21:42:14.722: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8863 /api/v1/namespaces/watch-8863/configmaps/e2e-watch-test-configmap-a c3f09f0a-e8ef-4345-b230-5cb1c4fb8652 11023901 0 2020-04-25 21:42:14 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Apr 25 21:42:24.734: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8863 /api/v1/namespaces/watch-8863/configmaps/e2e-watch-test-configmap-a c3f09f0a-e8ef-4345-b230-5cb1c4fb8652 11024033 0 2020-04-25 21:42:14 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Apr 25 21:42:24.734: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8863 /api/v1/namespaces/watch-8863/configmaps/e2e-watch-test-configmap-a c3f09f0a-e8ef-4345-b230-5cb1c4fb8652 11024033 0 2020-04-25 21:42:14 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Apr 25 21:42:34.742: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8863 /api/v1/namespaces/watch-8863/configmaps/e2e-watch-test-configmap-a c3f09f0a-e8ef-4345-b230-5cb1c4fb8652 11024064 0 2020-04-25 21:42:14 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 25 21:42:34.742: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8863 /api/v1/namespaces/watch-8863/configmaps/e2e-watch-test-configmap-a c3f09f0a-e8ef-4345-b230-5cb1c4fb8652 11024064 0 2020-04-25 21:42:14 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] 
[]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Apr 25 21:42:44.749: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8863 /api/v1/namespaces/watch-8863/configmaps/e2e-watch-test-configmap-a c3f09f0a-e8ef-4345-b230-5cb1c4fb8652 11024094 0 2020-04-25 21:42:14 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 25 21:42:44.749: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-8863 /api/v1/namespaces/watch-8863/configmaps/e2e-watch-test-configmap-a c3f09f0a-e8ef-4345-b230-5cb1c4fb8652 11024094 0 2020-04-25 21:42:14 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Apr 25 21:42:54.756: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8863 /api/v1/namespaces/watch-8863/configmaps/e2e-watch-test-configmap-b 799ac3fa-e403-4be2-b920-84b6c2654162 11024125 0 2020-04-25 21:42:54 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 25 21:42:54.757: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8863 /api/v1/namespaces/watch-8863/configmaps/e2e-watch-test-configmap-b 799ac3fa-e403-4be2-b920-84b6c2654162 11024125 0 2020-04-25 21:42:54 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Apr 25 21:43:04.763: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8863 
/api/v1/namespaces/watch-8863/configmaps/e2e-watch-test-configmap-b 799ac3fa-e403-4be2-b920-84b6c2654162 11024155 0 2020-04-25 21:42:54 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 25 21:43:04.763: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-8863 /api/v1/namespaces/watch-8863/configmaps/e2e-watch-test-configmap-b 799ac3fa-e403-4be2-b920-84b6c2654162 11024155 0 2020-04-25 21:42:54 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:43:14.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8863" for this suite. • [SLOW TEST:60.512 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":127,"skipped":2069,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:43:14.775: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium Apr 25 21:43:14.851: INFO: Waiting up to 5m0s for pod "pod-5220374d-59f1-4395-ab09-d82d566e326f" in namespace "emptydir-2476" to be "success or failure" Apr 25 21:43:14.860: INFO: Pod "pod-5220374d-59f1-4395-ab09-d82d566e326f": Phase="Pending", Reason="", readiness=false. Elapsed: 9.197028ms Apr 25 21:43:16.864: INFO: Pod "pod-5220374d-59f1-4395-ab09-d82d566e326f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012497347s Apr 25 21:43:18.868: INFO: Pod "pod-5220374d-59f1-4395-ab09-d82d566e326f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016815799s STEP: Saw pod success Apr 25 21:43:18.868: INFO: Pod "pod-5220374d-59f1-4395-ab09-d82d566e326f" satisfied condition "success or failure" Apr 25 21:43:18.870: INFO: Trying to get logs from node jerma-worker pod pod-5220374d-59f1-4395-ab09-d82d566e326f container test-container: STEP: delete the pod Apr 25 21:43:18.943: INFO: Waiting for pod pod-5220374d-59f1-4395-ab09-d82d566e326f to disappear Apr 25 21:43:18.959: INFO: Pod pod-5220374d-59f1-4395-ab09-d82d566e326f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:43:18.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2476" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":128,"skipped":2143,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:43:18.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Apr 25 21:43:19.048: INFO: Waiting up to 5m0s for pod "downward-api-0ded9a5e-5b34-4502-9951-08ad0e995ee4" in namespace "downward-api-12" to be "success or failure" Apr 25 21:43:19.055: INFO: Pod "downward-api-0ded9a5e-5b34-4502-9951-08ad0e995ee4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.723178ms Apr 25 21:43:21.058: INFO: Pod "downward-api-0ded9a5e-5b34-4502-9951-08ad0e995ee4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010052785s Apr 25 21:43:23.062: INFO: Pod "downward-api-0ded9a5e-5b34-4502-9951-08ad0e995ee4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.014008762s STEP: Saw pod success Apr 25 21:43:23.062: INFO: Pod "downward-api-0ded9a5e-5b34-4502-9951-08ad0e995ee4" satisfied condition "success or failure" Apr 25 21:43:23.065: INFO: Trying to get logs from node jerma-worker2 pod downward-api-0ded9a5e-5b34-4502-9951-08ad0e995ee4 container dapi-container: STEP: delete the pod Apr 25 21:43:23.104: INFO: Waiting for pod downward-api-0ded9a5e-5b34-4502-9951-08ad0e995ee4 to disappear Apr 25 21:43:23.115: INFO: Pod downward-api-0ded9a5e-5b34-4502-9951-08ad0e995ee4 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:43:23.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-12" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":129,"skipped":2202,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:43:23.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars Apr 25 21:43:23.226: INFO: Waiting up to 5m0s for pod "downward-api-3e9f7143-a9d2-4d6e-bb47-a31af734f2d2" in namespace 
"downward-api-3311" to be "success or failure" Apr 25 21:43:23.229: INFO: Pod "downward-api-3e9f7143-a9d2-4d6e-bb47-a31af734f2d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.517962ms Apr 25 21:43:25.232: INFO: Pod "downward-api-3e9f7143-a9d2-4d6e-bb47-a31af734f2d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00650074s Apr 25 21:43:27.236: INFO: Pod "downward-api-3e9f7143-a9d2-4d6e-bb47-a31af734f2d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010367434s STEP: Saw pod success Apr 25 21:43:27.236: INFO: Pod "downward-api-3e9f7143-a9d2-4d6e-bb47-a31af734f2d2" satisfied condition "success or failure" Apr 25 21:43:27.239: INFO: Trying to get logs from node jerma-worker2 pod downward-api-3e9f7143-a9d2-4d6e-bb47-a31af734f2d2 container dapi-container: STEP: delete the pod Apr 25 21:43:27.291: INFO: Waiting for pod downward-api-3e9f7143-a9d2-4d6e-bb47-a31af734f2d2 to disappear Apr 25 21:43:27.322: INFO: Pod downward-api-3e9f7143-a9d2-4d6e-bb47-a31af734f2d2 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:43:27.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3311" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":130,"skipped":2232,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:43:27.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-8113 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 25 21:43:27.390: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 25 21:43:47.556: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.23:8080/dial?request=hostname&protocol=http&host=10.244.1.22&port=8080&tries=1'] Namespace:pod-network-test-8113 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 25 21:43:47.556: INFO: >>> kubeConfig: /root/.kube/config I0425 21:43:47.581820 6 log.go:172] (0xc0020ffb80) (0xc002722c80) Create stream I0425 21:43:47.581847 6 log.go:172] (0xc0020ffb80) (0xc002722c80) Stream added, broadcasting: 1 I0425 21:43:47.583888 6 log.go:172] (0xc0020ffb80) Reply frame received for 1 I0425 21:43:47.583930 6 
log.go:172] (0xc0020ffb80) (0xc001a91040) Create stream I0425 21:43:47.583952 6 log.go:172] (0xc0020ffb80) (0xc001a91040) Stream added, broadcasting: 3 I0425 21:43:47.584785 6 log.go:172] (0xc0020ffb80) Reply frame received for 3 I0425 21:43:47.584822 6 log.go:172] (0xc0020ffb80) (0xc001a912c0) Create stream I0425 21:43:47.584838 6 log.go:172] (0xc0020ffb80) (0xc001a912c0) Stream added, broadcasting: 5 I0425 21:43:47.585915 6 log.go:172] (0xc0020ffb80) Reply frame received for 5 I0425 21:43:47.656421 6 log.go:172] (0xc0020ffb80) Data frame received for 3 I0425 21:43:47.656454 6 log.go:172] (0xc001a91040) (3) Data frame handling I0425 21:43:47.656476 6 log.go:172] (0xc001a91040) (3) Data frame sent I0425 21:43:47.657595 6 log.go:172] (0xc0020ffb80) Data frame received for 3 I0425 21:43:47.657674 6 log.go:172] (0xc001a91040) (3) Data frame handling I0425 21:43:47.657720 6 log.go:172] (0xc0020ffb80) Data frame received for 5 I0425 21:43:47.657739 6 log.go:172] (0xc001a912c0) (5) Data frame handling I0425 21:43:47.659246 6 log.go:172] (0xc0020ffb80) Data frame received for 1 I0425 21:43:47.659277 6 log.go:172] (0xc002722c80) (1) Data frame handling I0425 21:43:47.659316 6 log.go:172] (0xc002722c80) (1) Data frame sent I0425 21:43:47.659331 6 log.go:172] (0xc0020ffb80) (0xc002722c80) Stream removed, broadcasting: 1 I0425 21:43:47.659366 6 log.go:172] (0xc0020ffb80) Go away received I0425 21:43:47.659394 6 log.go:172] (0xc0020ffb80) (0xc002722c80) Stream removed, broadcasting: 1 I0425 21:43:47.659409 6 log.go:172] (0xc0020ffb80) (0xc001a91040) Stream removed, broadcasting: 3 I0425 21:43:47.659417 6 log.go:172] (0xc0020ffb80) (0xc001a912c0) Stream removed, broadcasting: 5 Apr 25 21:43:47.659: INFO: Waiting for responses: map[] Apr 25 21:43:47.662: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.23:8080/dial?request=hostname&protocol=http&host=10.244.2.183&port=8080&tries=1'] Namespace:pod-network-test-8113 PodName:host-test-container-pod 
ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 25 21:43:47.663: INFO: >>> kubeConfig: /root/.kube/config I0425 21:43:47.690538 6 log.go:172] (0xc0048986e0) (0xc001a91ea0) Create stream I0425 21:43:47.690571 6 log.go:172] (0xc0048986e0) (0xc001a91ea0) Stream added, broadcasting: 1 I0425 21:43:47.692839 6 log.go:172] (0xc0048986e0) Reply frame received for 1 I0425 21:43:47.692892 6 log.go:172] (0xc0048986e0) (0xc00231c000) Create stream I0425 21:43:47.692912 6 log.go:172] (0xc0048986e0) (0xc00231c000) Stream added, broadcasting: 3 I0425 21:43:47.694055 6 log.go:172] (0xc0048986e0) Reply frame received for 3 I0425 21:43:47.694097 6 log.go:172] (0xc0048986e0) (0xc001a91f40) Create stream I0425 21:43:47.694112 6 log.go:172] (0xc0048986e0) (0xc001a91f40) Stream added, broadcasting: 5 I0425 21:43:47.695029 6 log.go:172] (0xc0048986e0) Reply frame received for 5 I0425 21:43:47.768743 6 log.go:172] (0xc0048986e0) Data frame received for 3 I0425 21:43:47.768765 6 log.go:172] (0xc00231c000) (3) Data frame handling I0425 21:43:47.768776 6 log.go:172] (0xc00231c000) (3) Data frame sent I0425 21:43:47.769237 6 log.go:172] (0xc0048986e0) Data frame received for 3 I0425 21:43:47.769253 6 log.go:172] (0xc00231c000) (3) Data frame handling I0425 21:43:47.769543 6 log.go:172] (0xc0048986e0) Data frame received for 5 I0425 21:43:47.769555 6 log.go:172] (0xc001a91f40) (5) Data frame handling I0425 21:43:47.771151 6 log.go:172] (0xc0048986e0) Data frame received for 1 I0425 21:43:47.771172 6 log.go:172] (0xc001a91ea0) (1) Data frame handling I0425 21:43:47.771193 6 log.go:172] (0xc001a91ea0) (1) Data frame sent I0425 21:43:47.771208 6 log.go:172] (0xc0048986e0) (0xc001a91ea0) Stream removed, broadcasting: 1 I0425 21:43:47.771299 6 log.go:172] (0xc0048986e0) (0xc001a91ea0) Stream removed, broadcasting: 1 I0425 21:43:47.771332 6 log.go:172] (0xc0048986e0) (0xc00231c000) Stream removed, broadcasting: 3 I0425 21:43:47.771399 6 log.go:172] 
(0xc0048986e0) Go away received I0425 21:43:47.771556 6 log.go:172] (0xc0048986e0) (0xc001a91f40) Stream removed, broadcasting: 5 Apr 25 21:43:47.771: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:43:47.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8113" for this suite. • [SLOW TEST:20.433 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":131,"skipped":2261,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:43:47.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide 
container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 25 21:43:47.842: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d55e3c06-3880-4d7f-a550-49cc375e49fa" in namespace "projected-3302" to be "success or failure" Apr 25 21:43:47.867: INFO: Pod "downwardapi-volume-d55e3c06-3880-4d7f-a550-49cc375e49fa": Phase="Pending", Reason="", readiness=false. Elapsed: 25.058064ms Apr 25 21:43:49.871: INFO: Pod "downwardapi-volume-d55e3c06-3880-4d7f-a550-49cc375e49fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028824713s Apr 25 21:43:51.876: INFO: Pod "downwardapi-volume-d55e3c06-3880-4d7f-a550-49cc375e49fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033724854s STEP: Saw pod success Apr 25 21:43:51.876: INFO: Pod "downwardapi-volume-d55e3c06-3880-4d7f-a550-49cc375e49fa" satisfied condition "success or failure" Apr 25 21:43:51.880: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-d55e3c06-3880-4d7f-a550-49cc375e49fa container client-container: STEP: delete the pod Apr 25 21:43:51.915: INFO: Waiting for pod downwardapi-volume-d55e3c06-3880-4d7f-a550-49cc375e49fa to disappear Apr 25 21:43:51.930: INFO: Pod downwardapi-volume-d55e3c06-3880-4d7f-a550-49cc375e49fa no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:43:51.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3302" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":132,"skipped":2289,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:43:51.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name projected-secret-test-50044583-216d-4d41-97e2-d593d3b9b7e7 STEP: Creating a pod to test consume secrets Apr 25 21:43:52.016: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-372c4d7a-7568-42b1-831d-172abedc293b" in namespace "projected-4495" to be "success or failure" Apr 25 21:43:52.020: INFO: Pod "pod-projected-secrets-372c4d7a-7568-42b1-831d-172abedc293b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.611742ms Apr 25 21:43:54.102: INFO: Pod "pod-projected-secrets-372c4d7a-7568-42b1-831d-172abedc293b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085422877s Apr 25 21:43:56.106: INFO: Pod "pod-projected-secrets-372c4d7a-7568-42b1-831d-172abedc293b": Phase="Running", Reason="", readiness=true. Elapsed: 4.089884594s Apr 25 21:43:58.110: INFO: Pod "pod-projected-secrets-372c4d7a-7568-42b1-831d-172abedc293b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.094018803s STEP: Saw pod success Apr 25 21:43:58.110: INFO: Pod "pod-projected-secrets-372c4d7a-7568-42b1-831d-172abedc293b" satisfied condition "success or failure" Apr 25 21:43:58.113: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-372c4d7a-7568-42b1-831d-172abedc293b container secret-volume-test: STEP: delete the pod Apr 25 21:43:58.146: INFO: Waiting for pod pod-projected-secrets-372c4d7a-7568-42b1-831d-172abedc293b to disappear Apr 25 21:43:58.161: INFO: Pod pod-projected-secrets-372c4d7a-7568-42b1-831d-172abedc293b no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:43:58.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4495" for this suite. • [SLOW TEST:6.229 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":133,"skipped":2297,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:43:58.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a 
default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 25 21:43:58.277: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1ec1037a-fede-4e3b-a0d9-e98770a44dc8" in namespace "projected-5323" to be "success or failure" Apr 25 21:43:58.281: INFO: Pod "downwardapi-volume-1ec1037a-fede-4e3b-a0d9-e98770a44dc8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.743694ms Apr 25 21:44:00.292: INFO: Pod "downwardapi-volume-1ec1037a-fede-4e3b-a0d9-e98770a44dc8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015416231s Apr 25 21:44:02.297: INFO: Pod "downwardapi-volume-1ec1037a-fede-4e3b-a0d9-e98770a44dc8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020162122s STEP: Saw pod success Apr 25 21:44:02.297: INFO: Pod "downwardapi-volume-1ec1037a-fede-4e3b-a0d9-e98770a44dc8" satisfied condition "success or failure" Apr 25 21:44:02.300: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-1ec1037a-fede-4e3b-a0d9-e98770a44dc8 container client-container: STEP: delete the pod Apr 25 21:44:02.318: INFO: Waiting for pod downwardapi-volume-1ec1037a-fede-4e3b-a0d9-e98770a44dc8 to disappear Apr 25 21:44:02.322: INFO: Pod downwardapi-volume-1ec1037a-fede-4e3b-a0d9-e98770a44dc8 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:44:02.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5323" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":134,"skipped":2316,"failed":0} SSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:44:02.329: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Apr 25 21:44:02.450: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:44:02.454: INFO: Number of nodes with available pods: 0 Apr 25 21:44:02.454: INFO: Node jerma-worker is running more than one daemon pod Apr 25 21:44:03.500: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:44:03.505: INFO: Number of nodes with available pods: 0 Apr 25 21:44:03.505: INFO: Node jerma-worker is running more than one daemon pod Apr 25 21:44:04.482: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:44:04.544: INFO: Number of nodes with available pods: 0 Apr 25 21:44:04.544: INFO: Node jerma-worker is running more than one daemon pod Apr 25 21:44:05.459: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:44:05.462: INFO: Number of nodes with available pods: 0 Apr 25 21:44:05.462: INFO: Node jerma-worker is running more than one daemon pod Apr 25 21:44:06.459: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:44:06.463: INFO: Number of nodes with available pods: 2 Apr 25 21:44:06.463: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Apr 25 21:44:06.482: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 25 21:44:06.500: INFO: Number of nodes with available pods: 2 Apr 25 21:44:06.500: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9331, will wait for the garbage collector to delete the pods Apr 25 21:44:07.612: INFO: Deleting DaemonSet.extensions daemon-set took: 5.672985ms Apr 25 21:44:07.913: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.275212ms Apr 25 21:44:19.316: INFO: Number of nodes with available pods: 0 Apr 25 21:44:19.316: INFO: Number of running nodes: 0, number of available pods: 0 Apr 25 21:44:19.320: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9331/daemonsets","resourceVersion":"11024644"},"items":null} Apr 25 21:44:19.322: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9331/pods","resourceVersion":"11024644"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:44:19.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9331" for this suite. 
• [SLOW TEST:17.012 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":135,"skipped":2320,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:44:19.341: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 25 21:44:19.445: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9f92c2c9-d68b-4b5d-82ee-ef2fe65fcc59" in namespace "projected-476" to be "success or failure" Apr 25 21:44:19.473: INFO: Pod "downwardapi-volume-9f92c2c9-d68b-4b5d-82ee-ef2fe65fcc59": Phase="Pending", Reason="", readiness=false. Elapsed: 27.415234ms Apr 25 21:44:21.476: INFO: Pod "downwardapi-volume-9f92c2c9-d68b-4b5d-82ee-ef2fe65fcc59": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.030867194s Apr 25 21:44:23.480: INFO: Pod "downwardapi-volume-9f92c2c9-d68b-4b5d-82ee-ef2fe65fcc59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034863396s STEP: Saw pod success Apr 25 21:44:23.480: INFO: Pod "downwardapi-volume-9f92c2c9-d68b-4b5d-82ee-ef2fe65fcc59" satisfied condition "success or failure" Apr 25 21:44:23.483: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-9f92c2c9-d68b-4b5d-82ee-ef2fe65fcc59 container client-container: STEP: delete the pod Apr 25 21:44:23.516: INFO: Waiting for pod downwardapi-volume-9f92c2c9-d68b-4b5d-82ee-ef2fe65fcc59 to disappear Apr 25 21:44:23.520: INFO: Pod downwardapi-volume-9f92c2c9-d68b-4b5d-82ee-ef2fe65fcc59 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:44:23.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-476" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":136,"skipped":2343,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:44:23.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-0c78da14-a61f-46f1-8bcc-eb9cbb9b1124 STEP: Creating a pod to test consume secrets Apr 25 21:44:23.636: INFO: Waiting up to 5m0s for pod "pod-secrets-63e16f5f-9098-400e-80ed-c59752911a31" in namespace "secrets-3567" to be "success or failure" Apr 25 21:44:23.653: INFO: Pod "pod-secrets-63e16f5f-9098-400e-80ed-c59752911a31": Phase="Pending", Reason="", readiness=false. Elapsed: 17.020776ms Apr 25 21:44:25.657: INFO: Pod "pod-secrets-63e16f5f-9098-400e-80ed-c59752911a31": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021418262s Apr 25 21:44:27.662: INFO: Pod "pod-secrets-63e16f5f-9098-400e-80ed-c59752911a31": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.025637578s STEP: Saw pod success Apr 25 21:44:27.662: INFO: Pod "pod-secrets-63e16f5f-9098-400e-80ed-c59752911a31" satisfied condition "success or failure" Apr 25 21:44:27.665: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-63e16f5f-9098-400e-80ed-c59752911a31 container secret-volume-test: STEP: delete the pod Apr 25 21:44:27.715: INFO: Waiting for pod pod-secrets-63e16f5f-9098-400e-80ed-c59752911a31 to disappear Apr 25 21:44:27.742: INFO: Pod pod-secrets-63e16f5f-9098-400e-80ed-c59752911a31 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:44:27.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3567" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":137,"skipped":2353,"failed":0} SS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:44:27.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 25 21:44:32.342: INFO: Successfully updated pod 
"pod-update-51a67790-94be-468f-9d42-3e50b9ed2a63" STEP: verifying the updated pod is in kubernetes Apr 25 21:44:32.353: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:44:32.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4149" for this suite. •{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":138,"skipped":2355,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:44:32.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token Apr 25 21:44:32.929: INFO: created pod pod-service-account-defaultsa Apr 25 21:44:32.929: INFO: pod pod-service-account-defaultsa service account token volume mount: true Apr 25 21:44:32.942: INFO: created pod pod-service-account-mountsa Apr 25 21:44:32.942: INFO: pod pod-service-account-mountsa service account token volume mount: true Apr 25 21:44:32.974: INFO: created pod pod-service-account-nomountsa Apr 25 21:44:32.974: INFO: pod pod-service-account-nomountsa service account token volume mount: false Apr 25 21:44:32.998: INFO: created pod pod-service-account-defaultsa-mountspec Apr 25 21:44:32.998: INFO: pod 
pod-service-account-defaultsa-mountspec service account token volume mount: true Apr 25 21:44:33.049: INFO: created pod pod-service-account-mountsa-mountspec Apr 25 21:44:33.049: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Apr 25 21:44:33.058: INFO: created pod pod-service-account-nomountsa-mountspec Apr 25 21:44:33.058: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Apr 25 21:44:33.077: INFO: created pod pod-service-account-defaultsa-nomountspec Apr 25 21:44:33.077: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Apr 25 21:44:33.121: INFO: created pod pod-service-account-mountsa-nomountspec Apr 25 21:44:33.122: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Apr 25 21:44:33.222: INFO: created pod pod-service-account-nomountsa-nomountspec Apr 25 21:44:33.222: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:44:33.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1982" for this suite. 
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":139,"skipped":2374,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:44:33.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-cafd099d-e9c3-42a2-90ae-8098b7e7a57b STEP: Creating a pod to test consume configMaps Apr 25 21:44:33.511: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c380b815-758d-4c4e-b219-b8df610362fb" in namespace "projected-6808" to be "success or failure" Apr 25 21:44:33.557: INFO: Pod "pod-projected-configmaps-c380b815-758d-4c4e-b219-b8df610362fb": Phase="Pending", Reason="", readiness=false. Elapsed: 45.739936ms Apr 25 21:44:35.695: INFO: Pod "pod-projected-configmaps-c380b815-758d-4c4e-b219-b8df610362fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.18434965s Apr 25 21:44:38.008: INFO: Pod "pod-projected-configmaps-c380b815-758d-4c4e-b219-b8df610362fb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.497053752s Apr 25 21:44:40.121: INFO: Pod "pod-projected-configmaps-c380b815-758d-4c4e-b219-b8df610362fb": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.610280122s Apr 25 21:44:42.175: INFO: Pod "pod-projected-configmaps-c380b815-758d-4c4e-b219-b8df610362fb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.66367381s Apr 25 21:44:44.190: INFO: Pod "pod-projected-configmaps-c380b815-758d-4c4e-b219-b8df610362fb": Phase="Running", Reason="", readiness=true. Elapsed: 10.678768164s Apr 25 21:44:46.196: INFO: Pod "pod-projected-configmaps-c380b815-758d-4c4e-b219-b8df610362fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.684486347s STEP: Saw pod success Apr 25 21:44:46.196: INFO: Pod "pod-projected-configmaps-c380b815-758d-4c4e-b219-b8df610362fb" satisfied condition "success or failure" Apr 25 21:44:46.201: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-c380b815-758d-4c4e-b219-b8df610362fb container projected-configmap-volume-test: STEP: delete the pod Apr 25 21:44:46.232: INFO: Waiting for pod pod-projected-configmaps-c380b815-758d-4c4e-b219-b8df610362fb to disappear Apr 25 21:44:46.243: INFO: Pod pod-projected-configmaps-c380b815-758d-4c4e-b219-b8df610362fb no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:44:46.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6808" for this suite. 
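A hedged sketch of the kind of projected configMap volume this test consumes, where an items mapping remaps a key to a different file path inside the volume (all names here are illustrative, not the generated ones above):

```yaml
volumes:
- name: projected-configmap-volume
  projected:
    sources:
    - configMap:
        name: projected-configmap-test-volume-map   # hypothetical name
        items:
        - key: data-1                # key present in the ConfigMap
          path: path/to/data-2       # remapped file path within the mount
```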
• [SLOW TEST:12.956 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":140,"skipped":2383,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:44:46.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-0e9724d4-6333-4289-acc2-a0d931a9115b STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:44:52.470: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4420" for this suite. 
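The binary-data test above exercises the ConfigMap binaryData field, which carries base64-encoded bytes alongside plain-text data keys; a minimal sketch (name and payloads invented):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-binary-demo    # hypothetical name
data:
  text-data: "hello"             # UTF-8 text, surfaced as a file when mounted
binaryData:
  binary-data: AQIDBA==          # base64 of the raw bytes 0x01 0x02 0x03 0x04
```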
• [SLOW TEST:6.208 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":141,"skipped":2400,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:44:52.478: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on tmpfs Apr 25 21:44:52.571: INFO: Waiting up to 5m0s for pod "pod-58b8126c-2376-4174-8faf-367e18ffdde6" in namespace "emptydir-6551" to be "success or failure" Apr 25 21:44:52.576: INFO: Pod "pod-58b8126c-2376-4174-8faf-367e18ffdde6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.428083ms Apr 25 21:44:54.593: INFO: Pod "pod-58b8126c-2376-4174-8faf-367e18ffdde6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.021855446s Apr 25 21:44:56.597: INFO: Pod "pod-58b8126c-2376-4174-8faf-367e18ffdde6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025813123s STEP: Saw pod success Apr 25 21:44:56.597: INFO: Pod "pod-58b8126c-2376-4174-8faf-367e18ffdde6" satisfied condition "success or failure" Apr 25 21:44:56.601: INFO: Trying to get logs from node jerma-worker2 pod pod-58b8126c-2376-4174-8faf-367e18ffdde6 container test-container: STEP: delete the pod Apr 25 21:44:56.631: INFO: Waiting for pod pod-58b8126c-2376-4174-8faf-367e18ffdde6 to disappear Apr 25 21:44:56.635: INFO: Pod pod-58b8126c-2376-4174-8faf-367e18ffdde6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:44:56.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6551" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":142,"skipped":2462,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:44:56.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 25 21:44:56.739: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a64b4ac8-e501-41fd-b03e-26742258de90" in namespace "projected-8138" to be "success or failure" Apr 25 21:44:56.759: INFO: Pod "downwardapi-volume-a64b4ac8-e501-41fd-b03e-26742258de90": Phase="Pending", Reason="", readiness=false. Elapsed: 19.191907ms Apr 25 21:44:58.768: INFO: Pod "downwardapi-volume-a64b4ac8-e501-41fd-b03e-26742258de90": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028162709s Apr 25 21:45:00.771: INFO: Pod "downwardapi-volume-a64b4ac8-e501-41fd-b03e-26742258de90": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03204722s STEP: Saw pod success Apr 25 21:45:00.772: INFO: Pod "downwardapi-volume-a64b4ac8-e501-41fd-b03e-26742258de90" satisfied condition "success or failure" Apr 25 21:45:00.774: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-a64b4ac8-e501-41fd-b03e-26742258de90 container client-container: STEP: delete the pod Apr 25 21:45:00.812: INFO: Waiting for pod downwardapi-volume-a64b4ac8-e501-41fd-b03e-26742258de90 to disappear Apr 25 21:45:00.830: INFO: Pod downwardapi-volume-a64b4ac8-e501-41fd-b03e-26742258de90 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:45:00.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8138" for this suite. 
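The mode-on-item behavior this test asserts can be sketched as a downward API volume in which an items entry carries a file-specific mode that overrides the volume's defaultMode (field choices below are illustrative):

```yaml
volumes:
- name: podinfo
  downwardAPI:
    defaultMode: 0644        # applies to files without an explicit mode
    items:
    - path: podname
      mode: 0400             # per-item mode, overrides defaultMode for this file
      fieldRef:
        fieldPath: metadata.name
```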
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":143,"skipped":2463,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:45:00.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-1750.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1750.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 25 21:45:07.000: INFO: DNS probes using dns-1750/dns-test-fb19690f-9991-4ef1-93c8-f37e9efc2954 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:45:07.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1750" for this suite. 
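The probe scripts above derive the pod's A-record name by dashing the octets of the IP from `hostname -i`; that transformation runs anywhere awk is available. A sketch with a made-up pod IP, since the real one only exists inside the live probe pod:

```shell
#!/bin/sh
# Rebuild the podARec derivation from the probe script using a sample IP.
pod_ip="10.244.1.5"    # hypothetical pod IP; the test obtains it via `hostname -i`
pod_a_record=$(echo "$pod_ip" | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-1750.pod.cluster.local"}')
echo "$pod_a_record"   # dots in the IP become dashes in the record name
```

The probe then checks that this name resolves, via `dig +noall +answer +search` over both UDP and TCP, as the wheezy and jessie loops show. (Note the `$$` in the logged scripts is escaping; in plain shell each is a single `$`.)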
• [SLOW TEST:6.215 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":144,"skipped":2498,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:45:07.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 25 21:45:07.667: INFO: (0) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/ pods/ (200; 60.364543ms)
Apr 25 21:45:07.684: INFO: (1) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 17.04816ms)
Apr 25 21:45:07.689: INFO: (2) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 4.448026ms)
Apr 25 21:45:07.692: INFO: (3) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.477421ms)
Apr 25 21:45:07.696: INFO: (4) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.566903ms)
Apr 25 21:45:07.700: INFO: (5) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 4.087838ms)
Apr 25 21:45:07.704: INFO: (6) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.997975ms)
Apr 25 21:45:07.756: INFO: (7) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 51.619642ms)
Apr 25 21:45:07.759: INFO: (8) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.493195ms)
Apr 25 21:45:07.762: INFO: (9) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.005517ms)
Apr 25 21:45:07.765: INFO: (10) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.672578ms)
Apr 25 21:45:07.768: INFO: (11) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.562347ms)
Apr 25 21:45:07.771: INFO: (12) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.803846ms)
Apr 25 21:45:07.773: INFO: (13) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.516441ms)
Apr 25 21:45:07.776: INFO: (14) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.494908ms)
Apr 25 21:45:07.779: INFO: (15) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.058098ms)
Apr 25 21:45:07.782: INFO: (16) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.858216ms)
Apr 25 21:45:07.784: INFO: (17) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.697689ms)
Apr 25 21:45:07.787: INFO: (18) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.064194ms)
Apr 25 21:45:07.790: INFO: (19) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/
(200; 2.884045ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:45:07.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-9453" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":278,"completed":145,"skipped":2511,"failed":0} SSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:45:07.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-6865 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-6865 I0425 21:45:07.926759 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-6865, replica count: 2 I0425 21:45:10.977294 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0425 21:45:13.977562 6 runners.go:189] 
externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 25 21:45:13.977: INFO: Creating new exec pod Apr 25 21:45:19.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6865 execpodj5h6d -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Apr 25 21:45:19.262: INFO: stderr: "I0425 21:45:19.158015 1435 log.go:172] (0xc00050d080) (0xc0005861e0) Create stream\nI0425 21:45:19.158083 1435 log.go:172] (0xc00050d080) (0xc0005861e0) Stream added, broadcasting: 1\nI0425 21:45:19.160751 1435 log.go:172] (0xc00050d080) Reply frame received for 1\nI0425 21:45:19.160796 1435 log.go:172] (0xc00050d080) (0xc000586280) Create stream\nI0425 21:45:19.160810 1435 log.go:172] (0xc00050d080) (0xc000586280) Stream added, broadcasting: 3\nI0425 21:45:19.161870 1435 log.go:172] (0xc00050d080) Reply frame received for 3\nI0425 21:45:19.161915 1435 log.go:172] (0xc00050d080) (0xc000586320) Create stream\nI0425 21:45:19.161929 1435 log.go:172] (0xc00050d080) (0xc000586320) Stream added, broadcasting: 5\nI0425 21:45:19.162954 1435 log.go:172] (0xc00050d080) Reply frame received for 5\nI0425 21:45:19.255703 1435 log.go:172] (0xc00050d080) Data frame received for 5\nI0425 21:45:19.255735 1435 log.go:172] (0xc000586320) (5) Data frame handling\nI0425 21:45:19.255752 1435 log.go:172] (0xc000586320) (5) Data frame sent\nI0425 21:45:19.255761 1435 log.go:172] (0xc00050d080) Data frame received for 5\nI0425 21:45:19.255768 1435 log.go:172] (0xc000586320) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0425 21:45:19.255787 1435 log.go:172] (0xc000586320) (5) Data frame sent\nI0425 21:45:19.255922 1435 log.go:172] (0xc00050d080) Data frame received for 3\nI0425 21:45:19.255946 1435 log.go:172] (0xc000586280) (3) Data frame handling\nI0425 21:45:19.255988 1435 log.go:172] 
(0xc00050d080) Data frame received for 5\nI0425 21:45:19.256008 1435 log.go:172] (0xc000586320) (5) Data frame handling\nI0425 21:45:19.257615 1435 log.go:172] (0xc00050d080) Data frame received for 1\nI0425 21:45:19.257637 1435 log.go:172] (0xc0005861e0) (1) Data frame handling\nI0425 21:45:19.257655 1435 log.go:172] (0xc0005861e0) (1) Data frame sent\nI0425 21:45:19.257682 1435 log.go:172] (0xc00050d080) (0xc0005861e0) Stream removed, broadcasting: 1\nI0425 21:45:19.257696 1435 log.go:172] (0xc00050d080) Go away received\nI0425 21:45:19.258012 1435 log.go:172] (0xc00050d080) (0xc0005861e0) Stream removed, broadcasting: 1\nI0425 21:45:19.258030 1435 log.go:172] (0xc00050d080) (0xc000586280) Stream removed, broadcasting: 3\nI0425 21:45:19.258040 1435 log.go:172] (0xc00050d080) (0xc000586320) Stream removed, broadcasting: 5\n" Apr 25 21:45:19.262: INFO: stdout: "" Apr 25 21:45:19.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6865 execpodj5h6d -- /bin/sh -x -c nc -zv -t -w 2 10.103.1.12 80' Apr 25 21:45:19.485: INFO: stderr: "I0425 21:45:19.403100 1455 log.go:172] (0xc000a6f080) (0xc000b4e780) Create stream\nI0425 21:45:19.403170 1455 log.go:172] (0xc000a6f080) (0xc000b4e780) Stream added, broadcasting: 1\nI0425 21:45:19.408138 1455 log.go:172] (0xc000a6f080) Reply frame received for 1\nI0425 21:45:19.408194 1455 log.go:172] (0xc000a6f080) (0xc0006f8640) Create stream\nI0425 21:45:19.408213 1455 log.go:172] (0xc000a6f080) (0xc0006f8640) Stream added, broadcasting: 3\nI0425 21:45:19.409568 1455 log.go:172] (0xc000a6f080) Reply frame received for 3\nI0425 21:45:19.409596 1455 log.go:172] (0xc000a6f080) (0xc0004eb400) Create stream\nI0425 21:45:19.409606 1455 log.go:172] (0xc000a6f080) (0xc0004eb400) Stream added, broadcasting: 5\nI0425 21:45:19.410624 1455 log.go:172] (0xc000a6f080) Reply frame received for 5\nI0425 21:45:19.478290 1455 log.go:172] (0xc000a6f080) Data frame received for 3\nI0425 21:45:19.478315 
1455 log.go:172] (0xc0006f8640) (3) Data frame handling\nI0425 21:45:19.478333 1455 log.go:172] (0xc000a6f080) Data frame received for 5\nI0425 21:45:19.478357 1455 log.go:172] (0xc0004eb400) (5) Data frame handling\nI0425 21:45:19.478384 1455 log.go:172] (0xc0004eb400) (5) Data frame sent\nI0425 21:45:19.478410 1455 log.go:172] (0xc000a6f080) Data frame received for 5\nI0425 21:45:19.478428 1455 log.go:172] (0xc0004eb400) (5) Data frame handling\n+ nc -zv -t -w 2 10.103.1.12 80\nConnection to 10.103.1.12 80 port [tcp/http] succeeded!\nI0425 21:45:19.479835 1455 log.go:172] (0xc000a6f080) Data frame received for 1\nI0425 21:45:19.479863 1455 log.go:172] (0xc000b4e780) (1) Data frame handling\nI0425 21:45:19.479881 1455 log.go:172] (0xc000b4e780) (1) Data frame sent\nI0425 21:45:19.479900 1455 log.go:172] (0xc000a6f080) (0xc000b4e780) Stream removed, broadcasting: 1\nI0425 21:45:19.479920 1455 log.go:172] (0xc000a6f080) Go away received\nI0425 21:45:19.480314 1455 log.go:172] (0xc000a6f080) (0xc000b4e780) Stream removed, broadcasting: 1\nI0425 21:45:19.480348 1455 log.go:172] (0xc000a6f080) (0xc0006f8640) Stream removed, broadcasting: 3\nI0425 21:45:19.480359 1455 log.go:172] (0xc000a6f080) (0xc0004eb400) Stream removed, broadcasting: 5\n" Apr 25 21:45:19.485: INFO: stdout: "" Apr 25 21:45:19.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6865 execpodj5h6d -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 31343' Apr 25 21:45:19.708: INFO: stderr: "I0425 21:45:19.623453 1476 log.go:172] (0xc000a2c000) (0xc00066c6e0) Create stream\nI0425 21:45:19.623525 1476 log.go:172] (0xc000a2c000) (0xc00066c6e0) Stream added, broadcasting: 1\nI0425 21:45:19.627356 1476 log.go:172] (0xc000a2c000) Reply frame received for 1\nI0425 21:45:19.627428 1476 log.go:172] (0xc000a2c000) (0xc000944000) Create stream\nI0425 21:45:19.627448 1476 log.go:172] (0xc000a2c000) (0xc000944000) Stream added, broadcasting: 3\nI0425 21:45:19.628675 1476 
log.go:172] (0xc000a2c000) Reply frame received for 3\nI0425 21:45:19.628722 1476 log.go:172] (0xc000a2c000) (0xc000521540) Create stream\nI0425 21:45:19.628740 1476 log.go:172] (0xc000a2c000) (0xc000521540) Stream added, broadcasting: 5\nI0425 21:45:19.629899 1476 log.go:172] (0xc000a2c000) Reply frame received for 5\nI0425 21:45:19.701674 1476 log.go:172] (0xc000a2c000) Data frame received for 3\nI0425 21:45:19.701730 1476 log.go:172] (0xc000a2c000) Data frame received for 5\nI0425 21:45:19.701765 1476 log.go:172] (0xc000521540) (5) Data frame handling\nI0425 21:45:19.701782 1476 log.go:172] (0xc000521540) (5) Data frame sent\nI0425 21:45:19.701797 1476 log.go:172] (0xc000a2c000) Data frame received for 5\nI0425 21:45:19.701832 1476 log.go:172] (0xc000521540) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.10 31343\nConnection to 172.17.0.10 31343 port [tcp/31343] succeeded!\nI0425 21:45:19.701866 1476 log.go:172] (0xc000944000) (3) Data frame handling\nI0425 21:45:19.703636 1476 log.go:172] (0xc000a2c000) Data frame received for 1\nI0425 21:45:19.703665 1476 log.go:172] (0xc00066c6e0) (1) Data frame handling\nI0425 21:45:19.703682 1476 log.go:172] (0xc00066c6e0) (1) Data frame sent\nI0425 21:45:19.703697 1476 log.go:172] (0xc000a2c000) (0xc00066c6e0) Stream removed, broadcasting: 1\nI0425 21:45:19.703802 1476 log.go:172] (0xc000a2c000) Go away received\nI0425 21:45:19.704297 1476 log.go:172] (0xc000a2c000) (0xc00066c6e0) Stream removed, broadcasting: 1\nI0425 21:45:19.704334 1476 log.go:172] (0xc000a2c000) (0xc000944000) Stream removed, broadcasting: 3\nI0425 21:45:19.704349 1476 log.go:172] (0xc000a2c000) (0xc000521540) Stream removed, broadcasting: 5\n" Apr 25 21:45:19.708: INFO: stdout: "" Apr 25 21:45:19.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6865 execpodj5h6d -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 31343' Apr 25 21:45:19.930: INFO: stderr: "I0425 21:45:19.833032 1497 log.go:172] 
(0xc000910580) (0xc000990000) Create stream\nI0425 21:45:19.833085 1497 log.go:172] (0xc000910580) (0xc000990000) Stream added, broadcasting: 1\nI0425 21:45:19.844764 1497 log.go:172] (0xc000910580) Reply frame received for 1\nI0425 21:45:19.844844 1497 log.go:172] (0xc000910580) (0xc0009900a0) Create stream\nI0425 21:45:19.844869 1497 log.go:172] (0xc000910580) (0xc0009900a0) Stream added, broadcasting: 3\nI0425 21:45:19.846591 1497 log.go:172] (0xc000910580) Reply frame received for 3\nI0425 21:45:19.846632 1497 log.go:172] (0xc000910580) (0xc0006d7c20) Create stream\nI0425 21:45:19.846651 1497 log.go:172] (0xc000910580) (0xc0006d7c20) Stream added, broadcasting: 5\nI0425 21:45:19.847813 1497 log.go:172] (0xc000910580) Reply frame received for 5\nI0425 21:45:19.921530 1497 log.go:172] (0xc000910580) Data frame received for 5\nI0425 21:45:19.921577 1497 log.go:172] (0xc0006d7c20) (5) Data frame handling\nI0425 21:45:19.921601 1497 log.go:172] (0xc0006d7c20) (5) Data frame sent\nI0425 21:45:19.921621 1497 log.go:172] (0xc000910580) Data frame received for 5\nI0425 21:45:19.921642 1497 log.go:172] (0xc0006d7c20) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.8 31343\nConnection to 172.17.0.8 31343 port [tcp/31343] succeeded!\nI0425 21:45:19.921680 1497 log.go:172] (0xc0006d7c20) (5) Data frame sent\nI0425 21:45:19.921971 1497 log.go:172] (0xc000910580) Data frame received for 5\nI0425 21:45:19.921993 1497 log.go:172] (0xc0006d7c20) (5) Data frame handling\nI0425 21:45:19.922237 1497 log.go:172] (0xc000910580) Data frame received for 3\nI0425 21:45:19.922256 1497 log.go:172] (0xc0009900a0) (3) Data frame handling\nI0425 21:45:19.924821 1497 log.go:172] (0xc000910580) Data frame received for 1\nI0425 21:45:19.924850 1497 log.go:172] (0xc000990000) (1) Data frame handling\nI0425 21:45:19.924871 1497 log.go:172] (0xc000990000) (1) Data frame sent\nI0425 21:45:19.924914 1497 log.go:172] (0xc000910580) (0xc000990000) Stream removed, broadcasting: 1\nI0425 
21:45:19.924980 1497 log.go:172] (0xc000910580) Go away received\nI0425 21:45:19.925523 1497 log.go:172] (0xc000910580) (0xc000990000) Stream removed, broadcasting: 1\nI0425 21:45:19.925556 1497 log.go:172] (0xc000910580) (0xc0009900a0) Stream removed, broadcasting: 3\nI0425 21:45:19.925570 1497 log.go:172] (0xc000910580) (0xc0006d7c20) Stream removed, broadcasting: 5\n" Apr 25 21:45:19.930: INFO: stdout: "" Apr 25 21:45:19.930: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:45:19.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6865" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.190 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":146,"skipped":2516,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:45:19.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, 
basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 25 21:45:20.403: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 25 21:45:22.415: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723447920, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723447920, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723447920, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723447920, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 25 21:45:25.453: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 21:45:25.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-38" for this suite.
STEP: Destroying namespace "webhook-38-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:6.107 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":147,"skipped":2524,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 21:45:26.096: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name projected-secret-test-2fe2d7db-9662-4991-954f-54e4cd8336ae
STEP: Creating a pod to test consume secrets
Apr 25 21:45:26.255: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ddeb0a8e-d2a1-4b75-896a-586522aef8cd" in namespace "projected-9433" to be "success or failure"
Apr 25 21:45:26.291: INFO: Pod "pod-projected-secrets-ddeb0a8e-d2a1-4b75-896a-586522aef8cd": Phase="Pending", Reason="", readiness=false. Elapsed: 35.272018ms
Apr 25 21:45:28.294: INFO: Pod "pod-projected-secrets-ddeb0a8e-d2a1-4b75-896a-586522aef8cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039217145s
Apr 25 21:45:30.299: INFO: Pod "pod-projected-secrets-ddeb0a8e-d2a1-4b75-896a-586522aef8cd": Phase="Running", Reason="", readiness=true. Elapsed: 4.043604897s
Apr 25 21:45:32.303: INFO: Pod "pod-projected-secrets-ddeb0a8e-d2a1-4b75-896a-586522aef8cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.047724798s
STEP: Saw pod success
Apr 25 21:45:32.303: INFO: Pod "pod-projected-secrets-ddeb0a8e-d2a1-4b75-896a-586522aef8cd" satisfied condition "success or failure"
Apr 25 21:45:32.306: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-ddeb0a8e-d2a1-4b75-896a-586522aef8cd container projected-secret-volume-test:
STEP: delete the pod
Apr 25 21:45:32.324: INFO: Waiting for pod pod-projected-secrets-ddeb0a8e-d2a1-4b75-896a-586522aef8cd to disappear
Apr 25 21:45:32.329: INFO: Pod pod-projected-secrets-ddeb0a8e-d2a1-4b75-896a-586522aef8cd no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 21:45:32.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9433" for this suite.
• [SLOW TEST:6.239 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":148,"skipped":2567,"failed":0}
[sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 21:45:32.336: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 25 21:45:32.409: INFO: Creating ReplicaSet my-hostname-basic-5df37200-cf58-438a-81eb-22d1ed9561a6
Apr 25 21:45:32.432: INFO: Pod name my-hostname-basic-5df37200-cf58-438a-81eb-22d1ed9561a6: Found 0 pods out of 1
Apr 25 21:45:37.450: INFO: Pod name my-hostname-basic-5df37200-cf58-438a-81eb-22d1ed9561a6: Found 1 pods out of 1
Apr 25 21:45:37.450: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-5df37200-cf58-438a-81eb-22d1ed9561a6" is running
Apr 25 21:45:37.452: INFO: Pod "my-hostname-basic-5df37200-cf58-438a-81eb-22d1ed9561a6-x78mw" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-25 21:45:32 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-25 21:45:35 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-25 21:45:35 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-25 21:45:32 +0000 UTC Reason: Message:}])
Apr 25 21:45:37.452: INFO: Trying to dial the pod
Apr 25 21:45:42.481: INFO: Controller my-hostname-basic-5df37200-cf58-438a-81eb-22d1ed9561a6: Got expected result from replica 1 [my-hostname-basic-5df37200-cf58-438a-81eb-22d1ed9561a6-x78mw]: "my-hostname-basic-5df37200-cf58-438a-81eb-22d1ed9561a6-x78mw", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 21:45:42.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-9810" for this suite.
• [SLOW TEST:10.153 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":149,"skipped":2567,"failed":0}
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 21:45:42.489: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on tmpfs
Apr 25 21:45:42.547: INFO: Waiting up to 5m0s for pod "pod-43af12b8-17ac-420b-8a0e-d1fb0a0a8f57" in namespace "emptydir-9549" to be "success or failure"
Apr 25 21:45:42.551: INFO: Pod "pod-43af12b8-17ac-420b-8a0e-d1fb0a0a8f57": Phase="Pending", Reason="", readiness=false. Elapsed: 3.94295ms
Apr 25 21:45:44.555: INFO: Pod "pod-43af12b8-17ac-420b-8a0e-d1fb0a0a8f57": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008100311s
Apr 25 21:45:46.559: INFO: Pod "pod-43af12b8-17ac-420b-8a0e-d1fb0a0a8f57": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012089162s
STEP: Saw pod success
Apr 25 21:45:46.559: INFO: Pod "pod-43af12b8-17ac-420b-8a0e-d1fb0a0a8f57" satisfied condition "success or failure"
Apr 25 21:45:46.562: INFO: Trying to get logs from node jerma-worker2 pod pod-43af12b8-17ac-420b-8a0e-d1fb0a0a8f57 container test-container:
STEP: delete the pod
Apr 25 21:45:46.579: INFO: Waiting for pod pod-43af12b8-17ac-420b-8a0e-d1fb0a0a8f57 to disappear
Apr 25 21:45:46.582: INFO: Pod pod-43af12b8-17ac-420b-8a0e-d1fb0a0a8f57 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 21:45:46.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9549" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":150,"skipped":2574,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 21:45:46.590: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name cm-test-opt-del-798e6961-9056-4bbb-bf29-10e82d0299ae
STEP: Creating configMap with name cm-test-opt-upd-2a9f8f3b-dfa3-45ea-a24b-c8fe12629e02
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-798e6961-9056-4bbb-bf29-10e82d0299ae
STEP: Updating configmap cm-test-opt-upd-2a9f8f3b-dfa3-45ea-a24b-c8fe12629e02
STEP: Creating configMap with name cm-test-opt-create-b250a306-5fc2-4ee0-ac19-ffa674423629
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 21:45:56.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9766" for this suite.
• [SLOW TEST:10.302 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":151,"skipped":2588,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 21:45:56.892: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name secret-test-9b7fe0d6-c743-476d-aa78-acededc7279e
STEP: Creating a pod to test consume secrets
Apr 25 21:45:57.068: INFO: Waiting up to 5m0s for pod "pod-secrets-d534d84c-2fd2-427e-88fa-168c01192faf" in namespace "secrets-8444" to be "success or failure"
Apr 25 21:45:57.072: INFO: Pod "pod-secrets-d534d84c-2fd2-427e-88fa-168c01192faf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.410152ms
Apr 25 21:45:59.079: INFO: Pod "pod-secrets-d534d84c-2fd2-427e-88fa-168c01192faf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011168458s
Apr 25 21:46:01.087: INFO: Pod "pod-secrets-d534d84c-2fd2-427e-88fa-168c01192faf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019058136s
STEP: Saw pod success
Apr 25 21:46:01.088: INFO: Pod "pod-secrets-d534d84c-2fd2-427e-88fa-168c01192faf" satisfied condition "success or failure"
Apr 25 21:46:01.090: INFO: Trying to get logs from node jerma-worker pod pod-secrets-d534d84c-2fd2-427e-88fa-168c01192faf container secret-volume-test:
STEP: delete the pod
Apr 25 21:46:01.119: INFO: Waiting for pod pod-secrets-d534d84c-2fd2-427e-88fa-168c01192faf to disappear
Apr 25 21:46:01.126: INFO: Pod pod-secrets-d534d84c-2fd2-427e-88fa-168c01192faf no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 21:46:01.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8444" for this suite.
STEP: Destroying namespace "secret-namespace-9546" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":152,"skipped":2597,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 21:46:01.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-configmap-jss8
STEP: Creating a pod to test atomic-volume-subpath
Apr 25 21:46:01.224: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-jss8" in namespace "subpath-5647" to be "success or failure"
Apr 25 21:46:01.228: INFO: Pod "pod-subpath-test-configmap-jss8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092851ms
Apr 25 21:46:03.231: INFO: Pod "pod-subpath-test-configmap-jss8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007526967s
Apr 25 21:46:05.235: INFO: Pod "pod-subpath-test-configmap-jss8": Phase="Running", Reason="", readiness=true. Elapsed: 4.011373347s
Apr 25 21:46:07.238: INFO: Pod "pod-subpath-test-configmap-jss8": Phase="Running", Reason="", readiness=true. Elapsed: 6.014669315s
Apr 25 21:46:09.242: INFO: Pod "pod-subpath-test-configmap-jss8": Phase="Running", Reason="", readiness=true. Elapsed: 8.01875028s
Apr 25 21:46:11.248: INFO: Pod "pod-subpath-test-configmap-jss8": Phase="Running", Reason="", readiness=true. Elapsed: 10.024649369s
Apr 25 21:46:13.265: INFO: Pod "pod-subpath-test-configmap-jss8": Phase="Running", Reason="", readiness=true. Elapsed: 12.041346166s
Apr 25 21:46:15.269: INFO: Pod "pod-subpath-test-configmap-jss8": Phase="Running", Reason="", readiness=true. Elapsed: 14.045208569s
Apr 25 21:46:17.277: INFO: Pod "pod-subpath-test-configmap-jss8": Phase="Running", Reason="", readiness=true. Elapsed: 16.053178368s
Apr 25 21:46:19.281: INFO: Pod "pod-subpath-test-configmap-jss8": Phase="Running", Reason="", readiness=true. Elapsed: 18.057463911s
Apr 25 21:46:21.285: INFO: Pod "pod-subpath-test-configmap-jss8": Phase="Running", Reason="", readiness=true. Elapsed: 20.060870042s
Apr 25 21:46:23.288: INFO: Pod "pod-subpath-test-configmap-jss8": Phase="Running", Reason="", readiness=true. Elapsed: 22.064362829s
Apr 25 21:46:25.292: INFO: Pod "pod-subpath-test-configmap-jss8": Phase="Running", Reason="", readiness=true. Elapsed: 24.068403173s
Apr 25 21:46:27.296: INFO: Pod "pod-subpath-test-configmap-jss8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.072698201s
STEP: Saw pod success
Apr 25 21:46:27.296: INFO: Pod "pod-subpath-test-configmap-jss8" satisfied condition "success or failure"
Apr 25 21:46:27.300: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-jss8 container test-container-subpath-configmap-jss8:
STEP: delete the pod
Apr 25 21:46:27.321: INFO: Waiting for pod pod-subpath-test-configmap-jss8 to disappear
Apr 25 21:46:27.326: INFO: Pod pod-subpath-test-configmap-jss8 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-jss8
Apr 25 21:46:27.326: INFO: Deleting pod "pod-subpath-test-configmap-jss8" in namespace "subpath-5647"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 21:46:27.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5647" for this suite.
• [SLOW TEST:26.195 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":153,"skipped":2616,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 21:46:27.355: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating Agnhost RC
Apr 25 21:46:27.427: INFO: namespace kubectl-2731
Apr 25 21:46:27.427: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2731'
Apr 25 21:46:27.671: INFO: stderr: ""
Apr 25 21:46:27.671: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Apr 25 21:46:28.676: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 25 21:46:28.676: INFO: Found 0 / 1
Apr 25 21:46:29.679: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 25 21:46:29.679: INFO: Found 0 / 1
Apr 25 21:46:30.676: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 25 21:46:30.676: INFO: Found 1 / 1
Apr 25 21:46:30.676: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Apr 25 21:46:30.680: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 25 21:46:30.680: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Apr 25 21:46:30.680: INFO: wait on agnhost-master startup in kubectl-2731
Apr 25 21:46:30.680: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-2nlnw agnhost-master --namespace=kubectl-2731'
Apr 25 21:46:30.799: INFO: stderr: ""
Apr 25 21:46:30.799: INFO: stdout: "Paused\n"
STEP: exposing RC
Apr 25 21:46:30.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-2731'
Apr 25 21:46:30.969: INFO: stderr: ""
Apr 25 21:46:30.969: INFO: stdout: "service/rm2 exposed\n"
Apr 25 21:46:30.972: INFO: Service rm2 in namespace kubectl-2731 found.
STEP: exposing service
Apr 25 21:46:32.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-2731'
Apr 25 21:46:33.146: INFO: stderr: ""
Apr 25 21:46:33.146: INFO: stdout: "service/rm3 exposed\n"
Apr 25 21:46:33.151: INFO: Service rm3 in namespace kubectl-2731 found.
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 21:46:35.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2731" for this suite.
• [SLOW TEST:7.812 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1188
    should create services for rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":154,"skipped":2662,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 21:46:35.167: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Apr 25 21:46:35.219: INFO: >>> kubeConfig: /root/.kube/config
Apr 25 21:46:37.674: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 21:46:48.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3153" for this suite.
• [SLOW TEST:13.130 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":155,"skipped":2685,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 21:46:48.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Apr 25 21:46:48.945: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Apr 25 21:46:50.954: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723448008, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723448008, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723448009, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723448008, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 25 21:46:53.987: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 25 21:46:53.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 21:46:55.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-1378" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136
• [SLOW TEST:6.971 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":156,"skipped":2692,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 21:46:55.268: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Kubectl run deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1626
[It] should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Apr 25 21:46:55.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-5425'
Apr 25 21:46:55.420: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Apr 25 21:46:55.420: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the deployment e2e-test-httpd-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created
[AfterEach] Kubectl run deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1631
Apr 25 21:46:57.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-5425'
Apr 25 21:46:57.623: INFO: stderr: ""
Apr 25 21:46:57.623: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 21:46:57.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5425" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":278,"completed":157,"skipped":2725,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 21:46:57.682: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Apr 25 21:47:07.863: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2291 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 25 21:47:07.863: INFO: >>> kubeConfig: /root/.kube/config
I0425 21:47:07.896898 6 log.go:172] (0xc0041d2fd0) (0xc002723c20) Create stream
I0425 21:47:07.896926 6 log.go:172] (0xc0041d2fd0) (0xc002723c20) Stream added, broadcasting: 1
I0425 21:47:07.898932 6 log.go:172] (0xc0041d2fd0) Reply frame received for 1
I0425 21:47:07.898976 6 log.go:172] (0xc0041d2fd0) (0xc001d9a140) Create stream
I0425 21:47:07.898992 6 log.go:172] (0xc0041d2fd0) (0xc001d9a140) Stream added, broadcasting: 3
I0425 21:47:07.899922 6 log.go:172] (0xc0041d2fd0) Reply frame received for 3
I0425 21:47:07.899944 6 log.go:172] (0xc0041d2fd0) (0xc0020f7400) Create stream
I0425 21:47:07.899950 6 log.go:172] (0xc0041d2fd0) (0xc0020f7400) Stream added, broadcasting: 5
I0425 21:47:07.900824 6 log.go:172] (0xc0041d2fd0) Reply frame received for 5
I0425 21:47:07.979883 6 log.go:172] (0xc0041d2fd0) Data frame received for 5
I0425 21:47:07.979911 6 log.go:172] (0xc0020f7400) (5) Data frame handling
I0425 21:47:07.979959 6 log.go:172] (0xc0041d2fd0) Data frame received for 3
I0425 21:47:07.980011 6 log.go:172] (0xc001d9a140) (3) Data frame handling
I0425 21:47:07.980040 6 log.go:172] (0xc001d9a140) (3) Data frame sent
I0425 21:47:07.980067 6 log.go:172] (0xc0041d2fd0) Data frame received for 3
I0425 21:47:07.980079 6 log.go:172] (0xc001d9a140) (3) Data frame handling
I0425 21:47:07.981357 6 log.go:172] (0xc0041d2fd0) Data frame received for 1
I0425 21:47:07.981391 6 log.go:172] (0xc002723c20) (1) Data frame handling
I0425 21:47:07.981409 6 log.go:172] (0xc002723c20) (1) Data frame sent
I0425 21:47:07.981425 6 log.go:172] (0xc0041d2fd0) (0xc002723c20) Stream removed, broadcasting: 1
I0425 21:47:07.981443 6 log.go:172] (0xc0041d2fd0) Go away received
I0425 21:47:07.981596 6 log.go:172] (0xc0041d2fd0) (0xc002723c20) Stream removed, broadcasting: 1
I0425 21:47:07.981635 6 log.go:172] (0xc0041d2fd0) (0xc001d9a140) Stream removed, broadcasting: 3
I0425 21:47:07.981655 6 log.go:172] (0xc0041d2fd0) (0xc0020f7400) Stream removed, broadcasting: 5
Apr 25 21:47:07.981: INFO: Exec stderr: ""
Apr 25 21:47:07.981: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2291 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 25 21:47:07.981: INFO: >>> kubeConfig: /root/.kube/config
I0425 21:47:08.007410 6 log.go:172] (0xc0069d6370) (0xc001d9a640) Create stream
I0425 21:47:08.007435 6 log.go:172] (0xc0069d6370) (0xc001d9a640) Stream added, broadcasting: 1
I0425 21:47:08.009067 6 log.go:172] (0xc0069d6370) Reply frame received for 1
I0425 21:47:08.009104 6 log.go:172] (0xc0069d6370) (0xc002723cc0) Create stream
I0425 21:47:08.009228 6 log.go:172] (0xc0069d6370) (0xc002723cc0) Stream added, broadcasting: 3
I0425 21:47:08.010121 6 log.go:172] (0xc0069d6370) Reply frame received for 3
I0425 21:47:08.010166 6 log.go:172] (0xc0069d6370) (0xc0020f77c0) Create stream
I0425 21:47:08.010184 6 log.go:172] (0xc0069d6370) (0xc0020f77c0) Stream added, broadcasting: 5
I0425 21:47:08.011016 6 log.go:172] (0xc0069d6370) Reply frame received for 5
I0425 21:47:08.071537 6 log.go:172] (0xc0069d6370) Data frame received for 3
I0425 21:47:08.071572 6 log.go:172] (0xc002723cc0) (3) Data frame handling
I0425 21:47:08.071593 6 log.go:172] (0xc002723cc0) (3) Data frame sent
I0425 21:47:08.071602 6 log.go:172] (0xc0069d6370) Data frame received for 3
I0425 21:47:08.071605 6 log.go:172] (0xc002723cc0) (3) Data frame handling
I0425 21:47:08.071621 6 log.go:172] (0xc0069d6370) Data frame received for 5
I0425 21:47:08.071628 6 log.go:172] (0xc0020f77c0) (5) Data frame handling
I0425 21:47:08.072743 6 log.go:172] (0xc0069d6370) Data frame received for 1
I0425 21:47:08.072771 6 log.go:172] (0xc001d9a640) (1) Data frame handling
I0425 21:47:08.072800 6 log.go:172] (0xc001d9a640) (1) Data frame sent
I0425 21:47:08.072883 6 log.go:172] (0xc0069d6370) (0xc001d9a640) Stream removed, broadcasting: 1
I0425 21:47:08.072926 6 log.go:172] (0xc0069d6370) Go away received
I0425 21:47:08.073028 6 log.go:172] (0xc0069d6370) (0xc001d9a640) Stream removed, broadcasting: 1
I0425 21:47:08.073062 6 log.go:172] (0xc0069d6370) (0xc002723cc0) Stream removed, broadcasting: 3
I0425 21:47:08.073095 6 log.go:172] (0xc0069d6370) (0xc0020f77c0) Stream removed, broadcasting: 5
Apr 25 21:47:08.073: INFO: Exec stderr: ""
Apr 25 21:47:08.073: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2291 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 25 21:47:08.073: INFO: >>> kubeConfig: /root/.kube/config
I0425 21:47:08.103304 6 log.go:172] (0xc006fd4000) (0xc0020f7c20) Create stream
I0425 21:47:08.103354 6 log.go:172] (0xc006fd4000) (0xc0020f7c20) Stream added, broadcasting: 1
I0425 21:47:08.105487 6 log.go:172] (0xc006fd4000) Reply frame received for 1
I0425 21:47:08.105528 6 log.go:172] (0xc006fd4000) (0xc00231c8c0) Create stream
I0425 21:47:08.105544 6 log.go:172] (0xc006fd4000) (0xc00231c8c0) Stream added, broadcasting: 3
I0425 21:47:08.106261 6 log.go:172] (0xc006fd4000) Reply frame received for 3
I0425 21:47:08.106294 6 log.go:172] (0xc006fd4000) (0xc0020f7d60) Create stream
I0425 21:47:08.106307 6 log.go:172] (0xc006fd4000) (0xc0020f7d60) Stream added, broadcasting: 5
I0425 21:47:08.106985 6 log.go:172] (0xc006fd4000) Reply frame received for 5
I0425 21:47:08.156373 6 log.go:172] (0xc006fd4000) Data frame received for 3
I0425 21:47:08.156406 6 log.go:172] (0xc00231c8c0) (3) Data frame handling
I0425 21:47:08.156414 6 log.go:172] (0xc00231c8c0) (3) Data frame sent
I0425 21:47:08.156421 6 log.go:172] (0xc006fd4000) Data frame received for 3
I0425 21:47:08.156427 6 log.go:172] (0xc00231c8c0) (3) Data frame handling
I0425 21:47:08.156447 6 log.go:172] (0xc006fd4000) Data frame received for 5
I0425 21:47:08.156455 6 log.go:172] (0xc0020f7d60) (5) Data frame handling
I0425 21:47:08.157896 6 log.go:172] (0xc006fd4000) Data frame received for 1
I0425 21:47:08.157924 6 log.go:172] (0xc0020f7c20) (1) Data frame handling
I0425 21:47:08.157941 6 log.go:172] (0xc0020f7c20) (1) Data frame sent
I0425 21:47:08.157957 6 log.go:172] (0xc006fd4000) (0xc0020f7c20) Stream removed, broadcasting: 1
I0425 21:47:08.157971 6 log.go:172] (0xc006fd4000) Go away received
I0425 21:47:08.158076 6 log.go:172] (0xc006fd4000) (0xc0020f7c20) Stream removed, broadcasting: 1
I0425 21:47:08.158102 6 log.go:172] (0xc006fd4000) (0xc00231c8c0) Stream removed, broadcasting: 3
I0425 21:47:08.158122 6 log.go:172] (0xc006fd4000) (0xc0020f7d60) Stream removed, broadcasting: 5 Apr 25 21:47:08.158: INFO: Exec stderr: "" Apr 25 21:47:08.158: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2291 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 25 21:47:08.158: INFO: >>> kubeConfig: /root/.kube/config I0425 21:47:08.185943 6 log.go:172] (0xc0068284d0) (0xc00231cb40) Create stream I0425 21:47:08.185969 6 log.go:172] (0xc0068284d0) (0xc00231cb40) Stream added, broadcasting: 1 I0425 21:47:08.187578 6 log.go:172] (0xc0068284d0) Reply frame received for 1 I0425 21:47:08.187627 6 log.go:172] (0xc0068284d0) (0xc001e339a0) Create stream I0425 21:47:08.187644 6 log.go:172] (0xc0068284d0) (0xc001e339a0) Stream added, broadcasting: 3 I0425 21:47:08.188557 6 log.go:172] (0xc0068284d0) Reply frame received for 3 I0425 21:47:08.188600 6 log.go:172] (0xc0068284d0) (0xc00231cbe0) Create stream I0425 21:47:08.188615 6 log.go:172] (0xc0068284d0) (0xc00231cbe0) Stream added, broadcasting: 5 I0425 21:47:08.189530 6 log.go:172] (0xc0068284d0) Reply frame received for 5 I0425 21:47:08.257237 6 log.go:172] (0xc0068284d0) Data frame received for 5 I0425 21:47:08.257273 6 log.go:172] (0xc00231cbe0) (5) Data frame handling I0425 21:47:08.257353 6 log.go:172] (0xc0068284d0) Data frame received for 3 I0425 21:47:08.257390 6 log.go:172] (0xc001e339a0) (3) Data frame handling I0425 21:47:08.257418 6 log.go:172] (0xc001e339a0) (3) Data frame sent I0425 21:47:08.257458 6 log.go:172] (0xc0068284d0) Data frame received for 3 I0425 21:47:08.257478 6 log.go:172] (0xc001e339a0) (3) Data frame handling I0425 21:47:08.258781 6 log.go:172] (0xc0068284d0) Data frame received for 1 I0425 21:47:08.258799 6 log.go:172] (0xc00231cb40) (1) Data frame handling I0425 21:47:08.258809 6 log.go:172] (0xc00231cb40) (1) Data frame sent I0425 21:47:08.258822 6 log.go:172] (0xc0068284d0) 
(0xc00231cb40) Stream removed, broadcasting: 1 I0425 21:47:08.258841 6 log.go:172] (0xc0068284d0) Go away received I0425 21:47:08.258935 6 log.go:172] (0xc0068284d0) (0xc00231cb40) Stream removed, broadcasting: 1 I0425 21:47:08.258965 6 log.go:172] (0xc0068284d0) (0xc001e339a0) Stream removed, broadcasting: 3 I0425 21:47:08.258979 6 log.go:172] (0xc0068284d0) (0xc00231cbe0) Stream removed, broadcasting: 5 Apr 25 21:47:08.258: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Apr 25 21:47:08.259: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2291 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 25 21:47:08.259: INFO: >>> kubeConfig: /root/.kube/config I0425 21:47:08.295245 6 log.go:172] (0xc006fd4630) (0xc001a00320) Create stream I0425 21:47:08.295268 6 log.go:172] (0xc006fd4630) (0xc001a00320) Stream added, broadcasting: 1 I0425 21:47:08.303347 6 log.go:172] (0xc006fd4630) Reply frame received for 1 I0425 21:47:08.303422 6 log.go:172] (0xc006fd4630) (0xc00231c000) Create stream I0425 21:47:08.303442 6 log.go:172] (0xc006fd4630) (0xc00231c000) Stream added, broadcasting: 3 I0425 21:47:08.304350 6 log.go:172] (0xc006fd4630) Reply frame received for 3 I0425 21:47:08.304393 6 log.go:172] (0xc006fd4630) (0xc00231c140) Create stream I0425 21:47:08.304407 6 log.go:172] (0xc006fd4630) (0xc00231c140) Stream added, broadcasting: 5 I0425 21:47:08.305366 6 log.go:172] (0xc006fd4630) Reply frame received for 5 I0425 21:47:08.365894 6 log.go:172] (0xc006fd4630) Data frame received for 3 I0425 21:47:08.365951 6 log.go:172] (0xc006fd4630) Data frame received for 5 I0425 21:47:08.366002 6 log.go:172] (0xc00231c140) (5) Data frame handling I0425 21:47:08.366030 6 log.go:172] (0xc00231c000) (3) Data frame handling I0425 21:47:08.366047 6 log.go:172] (0xc00231c000) (3) Data frame sent I0425 21:47:08.366062 6 
log.go:172] (0xc006fd4630) Data frame received for 3 I0425 21:47:08.366075 6 log.go:172] (0xc00231c000) (3) Data frame handling I0425 21:47:08.367600 6 log.go:172] (0xc006fd4630) Data frame received for 1 I0425 21:47:08.367633 6 log.go:172] (0xc001a00320) (1) Data frame handling I0425 21:47:08.367668 6 log.go:172] (0xc001a00320) (1) Data frame sent I0425 21:47:08.367997 6 log.go:172] (0xc006fd4630) (0xc001a00320) Stream removed, broadcasting: 1 I0425 21:47:08.368071 6 log.go:172] (0xc006fd4630) Go away received I0425 21:47:08.368097 6 log.go:172] (0xc006fd4630) (0xc001a00320) Stream removed, broadcasting: 1 I0425 21:47:08.368118 6 log.go:172] (0xc006fd4630) (0xc00231c000) Stream removed, broadcasting: 3 I0425 21:47:08.368158 6 log.go:172] (0xc006fd4630) (0xc00231c140) Stream removed, broadcasting: 5 Apr 25 21:47:08.368: INFO: Exec stderr: "" Apr 25 21:47:08.368: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2291 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 25 21:47:08.368: INFO: >>> kubeConfig: /root/.kube/config I0425 21:47:08.399431 6 log.go:172] (0xc0028bc000) (0xc00231c320) Create stream I0425 21:47:08.399454 6 log.go:172] (0xc0028bc000) (0xc00231c320) Stream added, broadcasting: 1 I0425 21:47:08.401844 6 log.go:172] (0xc0028bc000) Reply frame received for 1 I0425 21:47:08.401888 6 log.go:172] (0xc0028bc000) (0xc001e58000) Create stream I0425 21:47:08.401906 6 log.go:172] (0xc0028bc000) (0xc001e58000) Stream added, broadcasting: 3 I0425 21:47:08.402964 6 log.go:172] (0xc0028bc000) Reply frame received for 3 I0425 21:47:08.403002 6 log.go:172] (0xc0028bc000) (0xc00231c3c0) Create stream I0425 21:47:08.403017 6 log.go:172] (0xc0028bc000) (0xc00231c3c0) Stream added, broadcasting: 5 I0425 21:47:08.404273 6 log.go:172] (0xc0028bc000) Reply frame received for 5 I0425 21:47:08.472980 6 log.go:172] (0xc0028bc000) Data frame received for 3 I0425 
21:47:08.473040 6 log.go:172] (0xc001e58000) (3) Data frame handling I0425 21:47:08.473062 6 log.go:172] (0xc001e58000) (3) Data frame sent I0425 21:47:08.473078 6 log.go:172] (0xc0028bc000) Data frame received for 3 I0425 21:47:08.473101 6 log.go:172] (0xc001e58000) (3) Data frame handling I0425 21:47:08.473278 6 log.go:172] (0xc0028bc000) Data frame received for 5 I0425 21:47:08.473302 6 log.go:172] (0xc00231c3c0) (5) Data frame handling I0425 21:47:08.474964 6 log.go:172] (0xc0028bc000) Data frame received for 1 I0425 21:47:08.474993 6 log.go:172] (0xc00231c320) (1) Data frame handling I0425 21:47:08.475007 6 log.go:172] (0xc00231c320) (1) Data frame sent I0425 21:47:08.475031 6 log.go:172] (0xc0028bc000) (0xc00231c320) Stream removed, broadcasting: 1 I0425 21:47:08.475067 6 log.go:172] (0xc0028bc000) Go away received I0425 21:47:08.475155 6 log.go:172] (0xc0028bc000) (0xc00231c320) Stream removed, broadcasting: 1 I0425 21:47:08.475181 6 log.go:172] (0xc0028bc000) (0xc001e58000) Stream removed, broadcasting: 3 I0425 21:47:08.475193 6 log.go:172] (0xc0028bc000) (0xc00231c3c0) Stream removed, broadcasting: 5 Apr 25 21:47:08.475: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Apr 25 21:47:08.475: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2291 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 25 21:47:08.475: INFO: >>> kubeConfig: /root/.kube/config I0425 21:47:08.509025 6 log.go:172] (0xc001ae6420) (0xc0020f6280) Create stream I0425 21:47:08.509062 6 log.go:172] (0xc001ae6420) (0xc0020f6280) Stream added, broadcasting: 1 I0425 21:47:08.510586 6 log.go:172] (0xc001ae6420) Reply frame received for 1 I0425 21:47:08.510625 6 log.go:172] (0xc001ae6420) (0xc001e58140) Create stream I0425 21:47:08.510646 6 log.go:172] (0xc001ae6420) (0xc001e58140) Stream added, broadcasting: 3 I0425 
21:47:08.511365 6 log.go:172] (0xc001ae6420) Reply frame received for 3 I0425 21:47:08.511422 6 log.go:172] (0xc001ae6420) (0xc001d34000) Create stream I0425 21:47:08.511441 6 log.go:172] (0xc001ae6420) (0xc001d34000) Stream added, broadcasting: 5 I0425 21:47:08.512131 6 log.go:172] (0xc001ae6420) Reply frame received for 5 I0425 21:47:08.578683 6 log.go:172] (0xc001ae6420) Data frame received for 5 I0425 21:47:08.578724 6 log.go:172] (0xc001ae6420) Data frame received for 3 I0425 21:47:08.578769 6 log.go:172] (0xc001e58140) (3) Data frame handling I0425 21:47:08.578790 6 log.go:172] (0xc001e58140) (3) Data frame sent I0425 21:47:08.578805 6 log.go:172] (0xc001ae6420) Data frame received for 3 I0425 21:47:08.578824 6 log.go:172] (0xc001e58140) (3) Data frame handling I0425 21:47:08.578842 6 log.go:172] (0xc001d34000) (5) Data frame handling I0425 21:47:08.580470 6 log.go:172] (0xc001ae6420) Data frame received for 1 I0425 21:47:08.580495 6 log.go:172] (0xc0020f6280) (1) Data frame handling I0425 21:47:08.580525 6 log.go:172] (0xc0020f6280) (1) Data frame sent I0425 21:47:08.580558 6 log.go:172] (0xc001ae6420) (0xc0020f6280) Stream removed, broadcasting: 1 I0425 21:47:08.580647 6 log.go:172] (0xc001ae6420) (0xc0020f6280) Stream removed, broadcasting: 1 I0425 21:47:08.580675 6 log.go:172] (0xc001ae6420) (0xc001e58140) Stream removed, broadcasting: 3 I0425 21:47:08.580695 6 log.go:172] (0xc001ae6420) (0xc001d34000) Stream removed, broadcasting: 5 Apr 25 21:47:08.580: INFO: Exec stderr: "" Apr 25 21:47:08.580: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2291 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 25 21:47:08.580: INFO: >>> kubeConfig: /root/.kube/config I0425 21:47:08.581354 6 log.go:172] (0xc001ae6420) Go away received I0425 21:47:08.606547 6 log.go:172] (0xc002262210) (0xc001d34aa0) Create stream I0425 21:47:08.606599 6 log.go:172] 
(0xc002262210) (0xc001d34aa0) Stream added, broadcasting: 1 I0425 21:47:08.608166 6 log.go:172] (0xc002262210) Reply frame received for 1 I0425 21:47:08.608202 6 log.go:172] (0xc002262210) (0xc001e320a0) Create stream I0425 21:47:08.608215 6 log.go:172] (0xc002262210) (0xc001e320a0) Stream added, broadcasting: 3 I0425 21:47:08.609302 6 log.go:172] (0xc002262210) Reply frame received for 3 I0425 21:47:08.609352 6 log.go:172] (0xc002262210) (0xc0020f6320) Create stream I0425 21:47:08.609368 6 log.go:172] (0xc002262210) (0xc0020f6320) Stream added, broadcasting: 5 I0425 21:47:08.610489 6 log.go:172] (0xc002262210) Reply frame received for 5 I0425 21:47:08.676602 6 log.go:172] (0xc002262210) Data frame received for 5 I0425 21:47:08.676638 6 log.go:172] (0xc0020f6320) (5) Data frame handling I0425 21:47:08.676660 6 log.go:172] (0xc002262210) Data frame received for 3 I0425 21:47:08.676680 6 log.go:172] (0xc001e320a0) (3) Data frame handling I0425 21:47:08.676687 6 log.go:172] (0xc001e320a0) (3) Data frame sent I0425 21:47:08.677011 6 log.go:172] (0xc002262210) Data frame received for 3 I0425 21:47:08.677039 6 log.go:172] (0xc001e320a0) (3) Data frame handling I0425 21:47:08.678480 6 log.go:172] (0xc002262210) Data frame received for 1 I0425 21:47:08.678495 6 log.go:172] (0xc001d34aa0) (1) Data frame handling I0425 21:47:08.678505 6 log.go:172] (0xc001d34aa0) (1) Data frame sent I0425 21:47:08.678517 6 log.go:172] (0xc002262210) (0xc001d34aa0) Stream removed, broadcasting: 1 I0425 21:47:08.678608 6 log.go:172] (0xc002262210) (0xc001d34aa0) Stream removed, broadcasting: 1 I0425 21:47:08.678627 6 log.go:172] (0xc002262210) (0xc001e320a0) Stream removed, broadcasting: 3 I0425 21:47:08.678685 6 log.go:172] (0xc002262210) Go away received I0425 21:47:08.678747 6 log.go:172] (0xc002262210) (0xc0020f6320) Stream removed, broadcasting: 5 Apr 25 21:47:08.678: INFO: Exec stderr: "" Apr 25 21:47:08.678: INFO: ExecWithOptions {Command:[cat /etc/hosts] 
Namespace:e2e-kubelet-etc-hosts-2291 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 25 21:47:08.678: INFO: >>> kubeConfig: /root/.kube/config I0425 21:47:08.711146 6 log.go:172] (0xc002262840) (0xc001d350e0) Create stream I0425 21:47:08.711186 6 log.go:172] (0xc002262840) (0xc001d350e0) Stream added, broadcasting: 1 I0425 21:47:08.713041 6 log.go:172] (0xc002262840) Reply frame received for 1 I0425 21:47:08.713089 6 log.go:172] (0xc002262840) (0xc0020f63c0) Create stream I0425 21:47:08.713108 6 log.go:172] (0xc002262840) (0xc0020f63c0) Stream added, broadcasting: 3 I0425 21:47:08.714339 6 log.go:172] (0xc002262840) Reply frame received for 3 I0425 21:47:08.714366 6 log.go:172] (0xc002262840) (0xc001e323c0) Create stream I0425 21:47:08.714385 6 log.go:172] (0xc002262840) (0xc001e323c0) Stream added, broadcasting: 5 I0425 21:47:08.715377 6 log.go:172] (0xc002262840) Reply frame received for 5 I0425 21:47:08.762050 6 log.go:172] (0xc002262840) Data frame received for 3 I0425 21:47:08.762156 6 log.go:172] (0xc0020f63c0) (3) Data frame handling I0425 21:47:08.762252 6 log.go:172] (0xc0020f63c0) (3) Data frame sent I0425 21:47:08.762325 6 log.go:172] (0xc002262840) Data frame received for 3 I0425 21:47:08.762393 6 log.go:172] (0xc0020f63c0) (3) Data frame handling I0425 21:47:08.762480 6 log.go:172] (0xc002262840) Data frame received for 5 I0425 21:47:08.762529 6 log.go:172] (0xc001e323c0) (5) Data frame handling I0425 21:47:08.764185 6 log.go:172] (0xc002262840) Data frame received for 1 I0425 21:47:08.764224 6 log.go:172] (0xc001d350e0) (1) Data frame handling I0425 21:47:08.764242 6 log.go:172] (0xc001d350e0) (1) Data frame sent I0425 21:47:08.764286 6 log.go:172] (0xc002262840) (0xc001d350e0) Stream removed, broadcasting: 1 I0425 21:47:08.764314 6 log.go:172] (0xc002262840) Go away received I0425 21:47:08.764460 6 log.go:172] (0xc002262840) (0xc001d350e0) Stream removed, broadcasting: 1 
I0425 21:47:08.764486 6 log.go:172] (0xc002262840) (0xc0020f63c0) Stream removed, broadcasting: 3 I0425 21:47:08.764504 6 log.go:172] (0xc002262840) (0xc001e323c0) Stream removed, broadcasting: 5 Apr 25 21:47:08.764: INFO: Exec stderr: "" Apr 25 21:47:08.764: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2291 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 25 21:47:08.764: INFO: >>> kubeConfig: /root/.kube/config I0425 21:47:08.800935 6 log.go:172] (0xc001a58420) (0xc001e328c0) Create stream I0425 21:47:08.800962 6 log.go:172] (0xc001a58420) (0xc001e328c0) Stream added, broadcasting: 1 I0425 21:47:08.802765 6 log.go:172] (0xc001a58420) Reply frame received for 1 I0425 21:47:08.802805 6 log.go:172] (0xc001a58420) (0xc001e58460) Create stream I0425 21:47:08.802821 6 log.go:172] (0xc001a58420) (0xc001e58460) Stream added, broadcasting: 3 I0425 21:47:08.803940 6 log.go:172] (0xc001a58420) Reply frame received for 3 I0425 21:47:08.803977 6 log.go:172] (0xc001a58420) (0xc001e32960) Create stream I0425 21:47:08.803999 6 log.go:172] (0xc001a58420) (0xc001e32960) Stream added, broadcasting: 5 I0425 21:47:08.804903 6 log.go:172] (0xc001a58420) Reply frame received for 5 I0425 21:47:08.876365 6 log.go:172] (0xc001a58420) Data frame received for 3 I0425 21:47:08.876400 6 log.go:172] (0xc001e58460) (3) Data frame handling I0425 21:47:08.876415 6 log.go:172] (0xc001e58460) (3) Data frame sent I0425 21:47:08.876427 6 log.go:172] (0xc001a58420) Data frame received for 3 I0425 21:47:08.876451 6 log.go:172] (0xc001e58460) (3) Data frame handling I0425 21:47:08.876470 6 log.go:172] (0xc001a58420) Data frame received for 5 I0425 21:47:08.876479 6 log.go:172] (0xc001e32960) (5) Data frame handling I0425 21:47:08.878203 6 log.go:172] (0xc001a58420) Data frame received for 1 I0425 21:47:08.878224 6 log.go:172] (0xc001e328c0) (1) Data frame handling I0425 
21:47:08.878250 6 log.go:172] (0xc001e328c0) (1) Data frame sent I0425 21:47:08.878524 6 log.go:172] (0xc001a58420) (0xc001e328c0) Stream removed, broadcasting: 1 I0425 21:47:08.878650 6 log.go:172] (0xc001a58420) (0xc001e328c0) Stream removed, broadcasting: 1 I0425 21:47:08.878683 6 log.go:172] (0xc001a58420) (0xc001e58460) Stream removed, broadcasting: 3 I0425 21:47:08.878703 6 log.go:172] (0xc001a58420) (0xc001e32960) Stream removed, broadcasting: 5 Apr 25 21:47:08.878: INFO: Exec stderr: "" I0425 21:47:08.878734 6 log.go:172] (0xc001a58420) Go away received [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:47:08.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-2291" for this suite. • [SLOW TEST:11.205 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":158,"skipped":2742,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:47:08.888: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 25 21:47:09.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Apr 25 21:47:09.147: INFO: stderr: "" Apr 25 21:47:09.147: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.4\", GitCommit:\"8d8aa39598534325ad77120c120a22b3a990b5ea\", GitTreeState:\"clean\", BuildDate:\"2020-04-05T10:48:13Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.2\", GitCommit:\"59603c6e503c87169aea6106f57b9f242f64df89\", GitTreeState:\"clean\", BuildDate:\"2020-02-07T01:05:17Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:47:09.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9324" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":159,"skipped":2804,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:47:09.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-ad90c84a-a3d7-47cd-8162-c72a44f89a41 STEP: Creating a pod to test consume secrets Apr 25 21:47:09.275: INFO: Waiting up to 5m0s for pod "pod-secrets-038eee6d-df83-4839-91c1-41ed221de700" in namespace "secrets-926" to be "success or failure" Apr 25 21:47:09.279: INFO: Pod "pod-secrets-038eee6d-df83-4839-91c1-41ed221de700": Phase="Pending", Reason="", readiness=false. Elapsed: 3.873049ms Apr 25 21:47:11.283: INFO: Pod "pod-secrets-038eee6d-df83-4839-91c1-41ed221de700": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008208351s Apr 25 21:47:13.287: INFO: Pod "pod-secrets-038eee6d-df83-4839-91c1-41ed221de700": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012008813s STEP: Saw pod success Apr 25 21:47:13.287: INFO: Pod "pod-secrets-038eee6d-df83-4839-91c1-41ed221de700" satisfied condition "success or failure" Apr 25 21:47:13.290: INFO: Trying to get logs from node jerma-worker pod pod-secrets-038eee6d-df83-4839-91c1-41ed221de700 container secret-env-test: STEP: delete the pod Apr 25 21:47:13.310: INFO: Waiting for pod pod-secrets-038eee6d-df83-4839-91c1-41ed221de700 to disappear Apr 25 21:47:13.314: INFO: Pod pod-secrets-038eee6d-df83-4839-91c1-41ed221de700 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:47:13.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-926" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":160,"skipped":2815,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:47:13.323: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override command Apr 25 21:47:13.454: INFO: Waiting up to 5m0s for pod 
"client-containers-436327f0-247a-4a97-adcc-90a1fa29d790" in namespace "containers-3142" to be "success or failure" Apr 25 21:47:13.471: INFO: Pod "client-containers-436327f0-247a-4a97-adcc-90a1fa29d790": Phase="Pending", Reason="", readiness=false. Elapsed: 16.638323ms Apr 25 21:47:15.475: INFO: Pod "client-containers-436327f0-247a-4a97-adcc-90a1fa29d790": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020467971s Apr 25 21:47:17.479: INFO: Pod "client-containers-436327f0-247a-4a97-adcc-90a1fa29d790": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024603327s STEP: Saw pod success Apr 25 21:47:17.479: INFO: Pod "client-containers-436327f0-247a-4a97-adcc-90a1fa29d790" satisfied condition "success or failure" Apr 25 21:47:17.482: INFO: Trying to get logs from node jerma-worker2 pod client-containers-436327f0-247a-4a97-adcc-90a1fa29d790 container test-container: STEP: delete the pod Apr 25 21:47:17.496: INFO: Waiting for pod client-containers-436327f0-247a-4a97-adcc-90a1fa29d790 to disappear Apr 25 21:47:17.501: INFO: Pod client-containers-436327f0-247a-4a97-adcc-90a1fa29d790 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:47:17.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3142" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":161,"skipped":2835,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:47:17.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Apr 25 21:47:25.618: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 25 21:47:25.626: INFO: Pod pod-with-prestop-exec-hook still exists Apr 25 21:47:27.627: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 25 21:47:27.631: INFO: Pod pod-with-prestop-exec-hook still exists Apr 25 21:47:29.627: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 25 21:47:29.631: INFO: Pod pod-with-prestop-exec-hook still exists Apr 25 21:47:31.627: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 25 21:47:31.631: INFO: Pod pod-with-prestop-exec-hook still exists Apr 25 21:47:33.627: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 25 21:47:33.630: INFO: Pod pod-with-prestop-exec-hook still exists Apr 25 21:47:35.627: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 25 21:47:35.630: INFO: Pod pod-with-prestop-exec-hook still exists Apr 25 21:47:37.627: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 25 21:47:37.630: INFO: Pod pod-with-prestop-exec-hook still exists Apr 25 21:47:39.627: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 25 21:47:39.630: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:47:39.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5581" for this suite. 
• [SLOW TEST:22.139 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":162,"skipped":2896,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:47:39.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1754 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 25 21:47:39.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 
--image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-423' Apr 25 21:47:39.844: INFO: stderr: "" Apr 25 21:47:39.844: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1759 Apr 25 21:47:39.873: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-423' Apr 25 21:47:49.562: INFO: stderr: "" Apr 25 21:47:49.562: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:47:49.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-423" for this suite. • [SLOW TEST:9.923 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1750 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":163,"skipped":2897,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:47:49.569: INFO: >>> kubeConfig: /root/.kube/config STEP: Building 
a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1275 STEP: creating the pod Apr 25 21:47:49.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9532' Apr 25 21:47:50.075: INFO: stderr: "" Apr 25 21:47:50.075: INFO: stdout: "pod/pause created\n" Apr 25 21:47:50.075: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Apr 25 21:47:50.075: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-9532" to be "running and ready" Apr 25 21:47:50.099: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 23.633291ms Apr 25 21:47:52.194: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119399411s Apr 25 21:47:54.198: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.12350231s Apr 25 21:47:54.198: INFO: Pod "pause" satisfied condition "running and ready" Apr 25 21:47:54.199: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod Apr 25 21:47:54.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-9532' Apr 25 21:47:54.312: INFO: stderr: "" Apr 25 21:47:54.312: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Apr 25 21:47:54.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9532' Apr 25 21:47:54.410: INFO: stderr: "" Apr 25 21:47:54.410: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Apr 25 21:47:54.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-9532' Apr 25 21:47:54.512: INFO: stderr: "" Apr 25 21:47:54.512: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Apr 25 21:47:54.512: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9532' Apr 25 21:47:54.626: INFO: stderr: "" Apr 25 21:47:54.626: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1282 STEP: using delete to clean up resources Apr 25 21:47:54.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9532' Apr 25 21:47:54.777: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has 
been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 25 21:47:54.777: INFO: stdout: "pod \"pause\" force deleted\n" Apr 25 21:47:54.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-9532' Apr 25 21:47:54.897: INFO: stderr: "No resources found in kubectl-9532 namespace.\n" Apr 25 21:47:54.897: INFO: stdout: "" Apr 25 21:47:54.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-9532 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 25 21:47:54.994: INFO: stderr: "" Apr 25 21:47:54.994: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:47:54.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9532" for this suite. 
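The label cycle above is driven entirely by kubectl: `kubectl label pods pause testing-label=testing-label-value` adds the label, the trailing-dash form `testing-label-` removes it, and `kubectl get pod pause -L testing-label` prints it as a column. Declaratively, the labeled state corresponds to a manifest like this sketch; the pod name is from the log, while the image is an assumption (the spec is piped to `create -f -` and not shown).

```yaml
# Sketch of the "pause" pod after labeling. Only metadata.name is
# confirmed by the log; the image is assumed.
apiVersion: v1
kind: Pod
metadata:
  name: pause
  labels:
    # Equivalent to:
    #   kubectl label pods pause testing-label=testing-label-value
    testing-label: testing-label-value
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1   # assumed image
```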
• [SLOW TEST:5.463 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1272 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":164,"skipped":2899,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:47:55.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:47:55.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6552" for this suite. 
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":165,"skipped":2933,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:47:55.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:47:55.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1817" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":278,"completed":166,"skipped":2946,"failed":0} SS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:47:55.701: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 25 21:47:59.821: INFO: Waiting up to 5m0s for pod "client-envvars-204a0783-a165-4f4e-8a18-28f200357a80" in namespace "pods-7124" to be "success or failure" Apr 25 21:47:59.859: INFO: Pod "client-envvars-204a0783-a165-4f4e-8a18-28f200357a80": Phase="Pending", Reason="", readiness=false. Elapsed: 37.792121ms Apr 25 21:48:01.863: INFO: Pod "client-envvars-204a0783-a165-4f4e-8a18-28f200357a80": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042150916s Apr 25 21:48:03.867: INFO: Pod "client-envvars-204a0783-a165-4f4e-8a18-28f200357a80": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.046275985s STEP: Saw pod success Apr 25 21:48:03.868: INFO: Pod "client-envvars-204a0783-a165-4f4e-8a18-28f200357a80" satisfied condition "success or failure" Apr 25 21:48:03.871: INFO: Trying to get logs from node jerma-worker2 pod client-envvars-204a0783-a165-4f4e-8a18-28f200357a80 container env3cont: STEP: delete the pod Apr 25 21:48:03.905: INFO: Waiting for pod client-envvars-204a0783-a165-4f4e-8a18-28f200357a80 to disappear Apr 25 21:48:03.922: INFO: Pod client-envvars-204a0783-a165-4f4e-8a18-28f200357a80 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:48:03.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7124" for this suite. • [SLOW TEST:8.230 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":167,"skipped":2948,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:48:03.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 25 21:48:04.007: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:48:05.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-2491" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":168,"skipped":2967,"failed":0} SSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:48:05.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 25 21:48:05.120: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Apr 25 21:48:10.171: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 25 
21:48:10.171: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Apr 25 21:48:10.191: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-130 /apis/apps/v1/namespaces/deployment-130/deployments/test-cleanup-deployment 0450bf92-c90b-4dc1-9cb8-dd7f3f159d57 11026477 1 2020-04-25 21:48:10 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00516f548 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Apr 25 21:48:10.228: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment 
"test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-130 /apis/apps/v1/namespaces/deployment-130/replicasets/test-cleanup-deployment-55ffc6b7b6 8527f70f-0a5c-4d54-bdd4-d3ffbc0537d3 11026480 1 2020-04-25 21:48:10 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 0450bf92-c90b-4dc1-9cb8-dd7f3f159d57 0xc003b32947 0xc003b32948}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003b329b8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 25 21:48:10.228: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Apr 25 21:48:10.228: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-130 /apis/apps/v1/namespaces/deployment-130/replicasets/test-cleanup-controller fe4bc477-8409-4006-8a32-10383bce91ed 11026479 1 2020-04-25 21:48:05 
+0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 0450bf92-c90b-4dc1-9cb8-dd7f3f159d57 0xc003b32877 0xc003b32878}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003b328d8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 25 21:48:10.263: INFO: Pod "test-cleanup-controller-92nt5" is available: &Pod{ObjectMeta:{test-cleanup-controller-92nt5 test-cleanup-controller- deployment-130 /api/v1/namespaces/deployment-130/pods/test-cleanup-controller-92nt5 99bd4af7-875f-4d8c-bf3f-04416a144c41 11026451 0 2020-04-25 21:48:05 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller fe4bc477-8409-4006-8a32-10383bce91ed 0xc003b32df7 0xc003b32df8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9hf28,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9hf28,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9hf28,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Val
ue:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:48:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:48:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:48:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:48:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.45,StartTime:2020-04-25 21:48:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-04-25 21:48:07 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://dd2d5cf192c4f06b050e90fe8ecaae745f5975d9bac66919af8a2633284fe357,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.45,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 25 21:48:10.264: INFO: Pod 
"test-cleanup-deployment-55ffc6b7b6-clqzr" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-clqzr test-cleanup-deployment-55ffc6b7b6- deployment-130 /api/v1/namespaces/deployment-130/pods/test-cleanup-deployment-55ffc6b7b6-clqzr ade46a92-0f0c-45a9-8ba9-7c3ab94db996 11026486 0 2020-04-25 21:48:10 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 8527f70f-0a5c-4d54-bdd4-d3ffbc0537d3 0xc003b32f87 0xc003b32f88}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9hf28,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9hf28,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9hf28,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{
},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 21:48:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:48:10.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-130" for this suite. 
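The Deployment dump above can be rewritten as a readable manifest. Every field here is taken from the struct dump itself: the name, the `name: cleanup-pod` selector, the agnhost image, the 25%/25% RollingUpdate strategy, and in particular `RevisionHistoryLimit:*0`, which is what makes the old ReplicaSet ("test-cleanup-controller") eligible for deletion once the rollout completes.

```yaml
# Reconstruction of "test-cleanup-deployment" from the log's struct
# dump. revisionHistoryLimit: 0 means no old ReplicaSets are kept,
# which is the behavior this test verifies.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cleanup-deployment
  labels:
    name: cleanup-pod
spec:
  replicas: 1
  revisionHistoryLimit: 0   # delete old ReplicaSets after rollout
  selector:
    matchLabels:
      name: cleanup-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: agnhost
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
```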
• [SLOW TEST:5.300 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":169,"skipped":2974,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:48:10.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1393.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1393.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1393.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-1393.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 25 21:48:16.649: INFO: DNS probes using dns-test-3f0827ff-004e-4231-ae85-f1d38b9e285a succeeded STEP: deleting the pod STEP: changing the externalName to 
bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1393.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1393.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1393.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-1393.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 25 21:48:22.743: INFO: File wheezy_udp@dns-test-service-3.dns-1393.svc.cluster.local from pod dns-1393/dns-test-3b08fc6e-df8a-4a6c-9f84-9ea1d3175817 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 25 21:48:22.761: INFO: File jessie_udp@dns-test-service-3.dns-1393.svc.cluster.local from pod dns-1393/dns-test-3b08fc6e-df8a-4a6c-9f84-9ea1d3175817 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 25 21:48:22.762: INFO: Lookups using dns-1393/dns-test-3b08fc6e-df8a-4a6c-9f84-9ea1d3175817 failed for: [wheezy_udp@dns-test-service-3.dns-1393.svc.cluster.local jessie_udp@dns-test-service-3.dns-1393.svc.cluster.local] Apr 25 21:48:27.766: INFO: File wheezy_udp@dns-test-service-3.dns-1393.svc.cluster.local from pod dns-1393/dns-test-3b08fc6e-df8a-4a6c-9f84-9ea1d3175817 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 25 21:48:27.770: INFO: File jessie_udp@dns-test-service-3.dns-1393.svc.cluster.local from pod dns-1393/dns-test-3b08fc6e-df8a-4a6c-9f84-9ea1d3175817 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Apr 25 21:48:27.770: INFO: Lookups using dns-1393/dns-test-3b08fc6e-df8a-4a6c-9f84-9ea1d3175817 failed for: [wheezy_udp@dns-test-service-3.dns-1393.svc.cluster.local jessie_udp@dns-test-service-3.dns-1393.svc.cluster.local] Apr 25 21:48:32.766: INFO: File wheezy_udp@dns-test-service-3.dns-1393.svc.cluster.local from pod dns-1393/dns-test-3b08fc6e-df8a-4a6c-9f84-9ea1d3175817 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 25 21:48:32.769: INFO: File jessie_udp@dns-test-service-3.dns-1393.svc.cluster.local from pod dns-1393/dns-test-3b08fc6e-df8a-4a6c-9f84-9ea1d3175817 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 25 21:48:32.770: INFO: Lookups using dns-1393/dns-test-3b08fc6e-df8a-4a6c-9f84-9ea1d3175817 failed for: [wheezy_udp@dns-test-service-3.dns-1393.svc.cluster.local jessie_udp@dns-test-service-3.dns-1393.svc.cluster.local] Apr 25 21:48:37.766: INFO: File wheezy_udp@dns-test-service-3.dns-1393.svc.cluster.local from pod dns-1393/dns-test-3b08fc6e-df8a-4a6c-9f84-9ea1d3175817 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 25 21:48:37.770: INFO: File jessie_udp@dns-test-service-3.dns-1393.svc.cluster.local from pod dns-1393/dns-test-3b08fc6e-df8a-4a6c-9f84-9ea1d3175817 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 25 21:48:37.770: INFO: Lookups using dns-1393/dns-test-3b08fc6e-df8a-4a6c-9f84-9ea1d3175817 failed for: [wheezy_udp@dns-test-service-3.dns-1393.svc.cluster.local jessie_udp@dns-test-service-3.dns-1393.svc.cluster.local] Apr 25 21:48:42.766: INFO: File wheezy_udp@dns-test-service-3.dns-1393.svc.cluster.local from pod dns-1393/dns-test-3b08fc6e-df8a-4a6c-9f84-9ea1d3175817 contains 'foo.example.com. ' instead of 'bar.example.com.' Apr 25 21:48:42.770: INFO: File jessie_udp@dns-test-service-3.dns-1393.svc.cluster.local from pod dns-1393/dns-test-3b08fc6e-df8a-4a6c-9f84-9ea1d3175817 contains 'foo.example.com. ' instead of 'bar.example.com.' 
Apr 25 21:48:42.770: INFO: Lookups using dns-1393/dns-test-3b08fc6e-df8a-4a6c-9f84-9ea1d3175817 failed for: [wheezy_udp@dns-test-service-3.dns-1393.svc.cluster.local jessie_udp@dns-test-service-3.dns-1393.svc.cluster.local] Apr 25 21:48:47.771: INFO: DNS probes using dns-test-3b08fc6e-df8a-4a6c-9f84-9ea1d3175817 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1393.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-1393.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1393.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-1393.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 25 21:48:54.430: INFO: DNS probes using dns-test-240e1248-b4c0-4e0b-a61d-f90f2ae21838 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:48:54.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1393" for this suite. 
• [SLOW TEST:44.167 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":170,"skipped":2986,"failed":0} S ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:48:54.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-771009d8-ace4-42c4-8f83-1b1e0feb0eaa in namespace container-probe-9130 Apr 25 21:48:58.938: INFO: Started pod liveness-771009d8-ace4-42c4-8f83-1b1e0feb0eaa in namespace container-probe-9130 STEP: checking the pod's current state and verifying that restartCount is present Apr 25 21:48:58.940: INFO: Initial restart count of pod liveness-771009d8-ace4-42c4-8f83-1b1e0feb0eaa is 0 Apr 25 21:49:15.011: INFO: Restart count of pod container-probe-9130/liveness-771009d8-ace4-42c4-8f83-1b1e0feb0eaa is now 1 (16.070881396s elapsed) STEP: deleting 
the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:49:15.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9130" for this suite. • [SLOW TEST:20.541 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":171,"skipped":2987,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:49:15.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the initial replication controller Apr 25 21:49:15.132: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8584' Apr 25 21:49:18.608: INFO: stderr: "" Apr 25 21:49:18.608: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 25 21:49:18.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8584' Apr 25 21:49:18.774: INFO: stderr: "" Apr 25 21:49:18.774: INFO: stdout: "update-demo-nautilus-9x2xv update-demo-nautilus-lkc6h " Apr 25 21:49:18.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9x2xv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8584' Apr 25 21:49:18.873: INFO: stderr: "" Apr 25 21:49:18.873: INFO: stdout: "" Apr 25 21:49:18.873: INFO: update-demo-nautilus-9x2xv is created but not running Apr 25 21:49:23.873: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8584' Apr 25 21:49:23.996: INFO: stderr: "" Apr 25 21:49:23.996: INFO: stdout: "update-demo-nautilus-9x2xv update-demo-nautilus-lkc6h " Apr 25 21:49:23.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9x2xv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8584' Apr 25 21:49:24.112: INFO: stderr: "" Apr 25 21:49:24.112: INFO: stdout: "true" Apr 25 21:49:24.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9x2xv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8584' Apr 25 21:49:24.216: INFO: stderr: "" Apr 25 21:49:24.216: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 25 21:49:24.216: INFO: validating pod update-demo-nautilus-9x2xv Apr 25 21:49:24.220: INFO: got data: { "image": "nautilus.jpg" } Apr 25 21:49:24.220: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 25 21:49:24.220: INFO: update-demo-nautilus-9x2xv is verified up and running Apr 25 21:49:24.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lkc6h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8584' Apr 25 21:49:24.311: INFO: stderr: "" Apr 25 21:49:24.311: INFO: stdout: "true" Apr 25 21:49:24.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lkc6h -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8584' Apr 25 21:49:24.412: INFO: stderr: "" Apr 25 21:49:24.412: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 25 21:49:24.412: INFO: validating pod update-demo-nautilus-lkc6h Apr 25 21:49:24.417: INFO: got data: { "image": "nautilus.jpg" } Apr 25 21:49:24.417: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 25 21:49:24.417: INFO: update-demo-nautilus-lkc6h is verified up and running STEP: rolling-update to new replication controller Apr 25 21:49:24.419: INFO: scanned /root for discovery docs: Apr 25 21:49:24.419: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-8584' Apr 25 21:49:47.024: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Apr 25 21:49:47.024: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 25 21:49:47.025: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8584' Apr 25 21:49:47.121: INFO: stderr: "" Apr 25 21:49:47.121: INFO: stdout: "update-demo-kitten-6t5dd update-demo-kitten-tzfjh update-demo-nautilus-9x2xv " STEP: Replicas for name=update-demo: expected=2 actual=3 Apr 25 21:49:52.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8584' Apr 25 21:49:52.231: INFO: stderr: "" Apr 25 21:49:52.231: INFO: stdout: "update-demo-kitten-6t5dd update-demo-kitten-tzfjh " Apr 25 21:49:52.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-6t5dd -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8584' Apr 25 21:49:52.322: INFO: stderr: "" Apr 25 21:49:52.322: INFO: stdout: "true" Apr 25 21:49:52.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-6t5dd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8584' Apr 25 21:49:52.412: INFO: stderr: "" Apr 25 21:49:52.412: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Apr 25 21:49:52.412: INFO: validating pod update-demo-kitten-6t5dd Apr 25 21:49:52.416: INFO: got data: { "image": "kitten.jpg" } Apr 25 21:49:52.416: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Apr 25 21:49:52.416: INFO: update-demo-kitten-6t5dd is verified up and running Apr 25 21:49:52.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-tzfjh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8584' Apr 25 21:49:52.508: INFO: stderr: "" Apr 25 21:49:52.508: INFO: stdout: "true" Apr 25 21:49:52.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-tzfjh -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8584' Apr 25 21:49:52.602: INFO: stderr: "" Apr 25 21:49:52.602: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Apr 25 21:49:52.602: INFO: validating pod update-demo-kitten-tzfjh Apr 25 21:49:52.607: INFO: got data: { "image": "kitten.jpg" } Apr 25 21:49:52.607: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Apr 25 21:49:52.607: INFO: update-demo-kitten-tzfjh is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:49:52.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8584" for this suite. • [SLOW TEST:37.552 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":172,"skipped":2990,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:49:52.615: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0425 21:50:23.256763 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 25 21:50:23.256: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:50:23.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2554" for this suite. 
• [SLOW TEST:30.650 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":173,"skipped":3002,"failed":0} S ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:50:23.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-9887, will wait for the garbage collector to delete the pods Apr 25 21:50:27.464: INFO: Deleting Job.batch foo took: 6.159263ms Apr 25 21:50:27.864: INFO: Terminating Job.batch foo pods took: 400.25493ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:51:09.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9887" for this suite. 
• [SLOW TEST:46.313 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":174,"skipped":3003,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:51:09.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 25 21:51:09.688: INFO: Waiting up to 5m0s for pod "pod-6004f593-67ff-437d-a604-26aa42839987" in namespace "emptydir-1618" to be "success or failure" Apr 25 21:51:09.709: INFO: Pod "pod-6004f593-67ff-437d-a604-26aa42839987": Phase="Pending", Reason="", readiness=false. Elapsed: 21.262885ms Apr 25 21:51:11.714: INFO: Pod "pod-6004f593-67ff-437d-a604-26aa42839987": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025784159s Apr 25 21:51:13.747: INFO: Pod "pod-6004f593-67ff-437d-a604-26aa42839987": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.058443906s STEP: Saw pod success Apr 25 21:51:13.747: INFO: Pod "pod-6004f593-67ff-437d-a604-26aa42839987" satisfied condition "success or failure" Apr 25 21:51:13.750: INFO: Trying to get logs from node jerma-worker2 pod pod-6004f593-67ff-437d-a604-26aa42839987 container test-container: STEP: delete the pod Apr 25 21:51:13.820: INFO: Waiting for pod pod-6004f593-67ff-437d-a604-26aa42839987 to disappear Apr 25 21:51:13.867: INFO: Pod pod-6004f593-67ff-437d-a604-26aa42839987 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:51:13.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1618" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":175,"skipped":3026,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:51:13.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 25 
21:51:14.614: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 25 21:51:16.623: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723448274, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723448274, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723448274, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723448274, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 25 21:51:19.661: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 25 21:51:19.665: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-387-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:51:20.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3523" for this suite. 
STEP: Destroying namespace "webhook-3523-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.055 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":176,"skipped":3030,"failed":0} SSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:51:20.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:51:32.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6976" for this suite. • [SLOW TEST:11.166 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":278,"completed":177,"skipped":3035,"failed":0} SS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:51:32.113: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token STEP: reading a file in the container Apr 25 21:51:36.709: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1280 pod-service-account-2338baa6-2c62-4b32-b5e4-25ffebf46b9d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Apr 25 21:51:36.950: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1280 pod-service-account-2338baa6-2c62-4b32-b5e4-25ffebf46b9d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Apr 25 21:51:37.135: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1280 pod-service-account-2338baa6-2c62-4b32-b5e4-25ffebf46b9d -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:51:37.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1280" for this suite. 
• [SLOW TEST:5.265 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":178,"skipped":3037,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:51:37.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 25 21:51:37.854: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 25 21:51:39.865: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723448297, loc:(*time.Location)(0x78ee080)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723448297, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723448297, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723448297, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 25 21:51:42.919: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:51:43.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5356" for this suite. STEP: Destroying namespace "webhook-5356-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.769 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":179,"skipped":3038,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:51:43.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 25 21:51:43.233: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Apr 25 21:51:46.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6044 create -f -' Apr 25 21:51:49.149: INFO: stderr: "" Apr 25 21:51:49.149: INFO: stdout: 
"e2e-test-crd-publish-openapi-4707-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Apr 25 21:51:49.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6044 delete e2e-test-crd-publish-openapi-4707-crds test-foo' Apr 25 21:51:49.254: INFO: stderr: "" Apr 25 21:51:49.254: INFO: stdout: "e2e-test-crd-publish-openapi-4707-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Apr 25 21:51:49.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6044 apply -f -' Apr 25 21:51:49.534: INFO: stderr: "" Apr 25 21:51:49.534: INFO: stdout: "e2e-test-crd-publish-openapi-4707-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Apr 25 21:51:49.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6044 delete e2e-test-crd-publish-openapi-4707-crds test-foo' Apr 25 21:51:49.763: INFO: stderr: "" Apr 25 21:51:49.763: INFO: stdout: "e2e-test-crd-publish-openapi-4707-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Apr 25 21:51:49.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6044 create -f -' Apr 25 21:51:50.107: INFO: rc: 1 Apr 25 21:51:50.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6044 apply -f -' Apr 25 21:51:50.411: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Apr 25 21:51:50.412: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6044 create -f -' Apr 25 21:51:50.736: INFO: rc: 1 Apr 25 21:51:50.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config 
--namespace=crd-publish-openapi-6044 apply -f -' Apr 25 21:51:51.002: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Apr 25 21:51:51.002: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4707-crds' Apr 25 21:51:51.286: INFO: stderr: "" Apr 25 21:51:51.286: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4707-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Apr 25 21:51:51.287: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4707-crds.metadata' Apr 25 21:51:51.551: INFO: stderr: "" Apr 25 21:51:51.551: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4707-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. 
Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. 
Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. 
Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. 
DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Apr 25 21:51:51.551: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4707-crds.spec' Apr 25 21:51:51.819: INFO: stderr: "" Apr 25 21:51:51.819: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4707-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Apr 25 21:51:51.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4707-crds.spec.bars' Apr 25 21:51:52.095: INFO: stderr: "" Apr 25 21:51:52.095: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4707-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Apr 25 21:51:52.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4707-crds.spec.bars2' Apr 25 21:51:52.358: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:51:54.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-6044" for this 
suite. • [SLOW TEST:11.166 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":180,"skipped":3060,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:51:54.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Apr 25 21:51:58.908: INFO: Successfully updated pod "adopt-release-mnvxj" STEP: Checking that the Job readopts the Pod Apr 25 21:51:58.908: INFO: Waiting up to 15m0s for pod "adopt-release-mnvxj" in namespace "job-6721" to be "adopted" Apr 25 21:51:58.927: INFO: Pod "adopt-release-mnvxj": Phase="Running", Reason="", readiness=true. Elapsed: 18.576777ms Apr 25 21:52:00.931: INFO: Pod "adopt-release-mnvxj": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.022760069s Apr 25 21:52:00.931: INFO: Pod "adopt-release-mnvxj" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Apr 25 21:52:01.440: INFO: Successfully updated pod "adopt-release-mnvxj" STEP: Checking that the Job releases the Pod Apr 25 21:52:01.440: INFO: Waiting up to 15m0s for pod "adopt-release-mnvxj" in namespace "job-6721" to be "released" Apr 25 21:52:01.448: INFO: Pod "adopt-release-mnvxj": Phase="Running", Reason="", readiness=true. Elapsed: 7.434006ms Apr 25 21:52:03.452: INFO: Pod "adopt-release-mnvxj": Phase="Running", Reason="", readiness=true. Elapsed: 2.011677796s Apr 25 21:52:03.452: INFO: Pod "adopt-release-mnvxj" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:52:03.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-6721" for this suite. • [SLOW TEST:9.148 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":181,"skipped":3091,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:52:03.464: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on node default medium Apr 25 21:52:03.586: INFO: Waiting up to 5m0s for pod "pod-818c87dd-b4d5-4d7f-b9fe-3011fe7dacbe" in namespace "emptydir-2607" to be "success or failure" Apr 25 21:52:03.591: INFO: Pod "pod-818c87dd-b4d5-4d7f-b9fe-3011fe7dacbe": Phase="Pending", Reason="", readiness=false. Elapsed: 5.031545ms Apr 25 21:52:05.595: INFO: Pod "pod-818c87dd-b4d5-4d7f-b9fe-3011fe7dacbe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00901323s Apr 25 21:52:07.600: INFO: Pod "pod-818c87dd-b4d5-4d7f-b9fe-3011fe7dacbe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013440666s STEP: Saw pod success Apr 25 21:52:07.600: INFO: Pod "pod-818c87dd-b4d5-4d7f-b9fe-3011fe7dacbe" satisfied condition "success or failure" Apr 25 21:52:07.603: INFO: Trying to get logs from node jerma-worker2 pod pod-818c87dd-b4d5-4d7f-b9fe-3011fe7dacbe container test-container: STEP: delete the pod Apr 25 21:52:07.657: INFO: Waiting for pod pod-818c87dd-b4d5-4d7f-b9fe-3011fe7dacbe to disappear Apr 25 21:52:07.736: INFO: Pod pod-818c87dd-b4d5-4d7f-b9fe-3011fe7dacbe no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:52:07.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2607" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":182,"skipped":3105,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:52:07.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation Apr 25 21:52:07.809: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation Apr 25 21:52:19.399: INFO: >>> kubeConfig: /root/.kube/config Apr 25 21:52:21.355: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:52:31.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-858" for this suite. 
• [SLOW TEST:24.202 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":183,"skipped":3106,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:52:31.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 25 21:52:32.019: INFO: Waiting up to 5m0s for pod "busybox-user-65534-d4ea88fb-96e9-4192-a573-150a78809250" in namespace "security-context-test-3067" to be "success or failure" Apr 25 21:52:32.023: INFO: Pod "busybox-user-65534-d4ea88fb-96e9-4192-a573-150a78809250": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.477131ms Apr 25 21:52:34.027: INFO: Pod "busybox-user-65534-d4ea88fb-96e9-4192-a573-150a78809250": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007380015s Apr 25 21:52:36.031: INFO: Pod "busybox-user-65534-d4ea88fb-96e9-4192-a573-150a78809250": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011471785s Apr 25 21:52:36.031: INFO: Pod "busybox-user-65534-d4ea88fb-96e9-4192-a573-150a78809250" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:52:36.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3067" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":184,"skipped":3115,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:52:36.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-31b71b53-9c55-47c5-903a-d0b0d375d6c5 STEP: Creating a pod to test consume configMaps Apr 25 21:52:36.128: INFO: Waiting up to 5m0s for pod 
"pod-configmaps-7a4985fa-9496-48a0-a0cc-d22a5a016f9f" in namespace "configmap-7220" to be "success or failure" Apr 25 21:52:36.132: INFO: Pod "pod-configmaps-7a4985fa-9496-48a0-a0cc-d22a5a016f9f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.25552ms Apr 25 21:52:38.136: INFO: Pod "pod-configmaps-7a4985fa-9496-48a0-a0cc-d22a5a016f9f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008314497s Apr 25 21:52:40.141: INFO: Pod "pod-configmaps-7a4985fa-9496-48a0-a0cc-d22a5a016f9f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013034811s STEP: Saw pod success Apr 25 21:52:40.141: INFO: Pod "pod-configmaps-7a4985fa-9496-48a0-a0cc-d22a5a016f9f" satisfied condition "success or failure" Apr 25 21:52:40.144: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-7a4985fa-9496-48a0-a0cc-d22a5a016f9f container configmap-volume-test: STEP: delete the pod Apr 25 21:52:40.163: INFO: Waiting for pod pod-configmaps-7a4985fa-9496-48a0-a0cc-d22a5a016f9f to disappear Apr 25 21:52:40.168: INFO: Pod pod-configmaps-7a4985fa-9496-48a0-a0cc-d22a5a016f9f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:52:40.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7220" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":185,"skipped":3131,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:52:40.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 25 21:52:40.266: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:52:41.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3799" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":278,"completed":186,"skipped":3137,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:52:41.477: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 25 21:52:42.096: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 25 21:52:44.144: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723448362, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723448362, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723448362, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723448362, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 25 21:52:46.149: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723448362, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723448362, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723448362, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723448362, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 25 21:52:49.199: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted 
configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:52:59.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5271" for this suite. STEP: Destroying namespace "webhook-5271-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.079 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":187,"skipped":3153,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:52:59.558: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, 
basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:53:04.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5773" for this suite. • [SLOW TEST:5.161 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":188,"skipped":3206,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:53:04.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 25 21:53:05.234: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 25 21:53:07.246: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723448385, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723448385, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723448385, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723448385, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 25 21:53:10.265: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 25 21:53:10.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6263-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] 
AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:53:11.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-627" for this suite. STEP: Destroying namespace "webhook-627-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.816 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":189,"skipped":3230,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:53:11.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 Apr 25 21:53:11.697: INFO: Waiting up to 1m0s for all (but 0) nodes to be 
ready Apr 25 21:53:11.709: INFO: Waiting for terminating namespaces to be deleted... Apr 25 21:53:11.711: INFO: Logging pods the kubelet thinks is on node jerma-worker before test Apr 25 21:53:11.727: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 25 21:53:11.727: INFO: Container kindnet-cni ready: true, restart count 0 Apr 25 21:53:11.727: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 25 21:53:11.727: INFO: Container kube-proxy ready: true, restart count 0 Apr 25 21:53:11.727: INFO: pod-adoption from replication-controller-5773 started at 2020-04-25 21:52:59 +0000 UTC (1 container statuses recorded) Apr 25 21:53:11.727: INFO: Container pod-adoption ready: false, restart count 0 Apr 25 21:53:11.727: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test Apr 25 21:53:11.733: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 25 21:53:11.733: INFO: Container kindnet-cni ready: true, restart count 0 Apr 25 21:53:11.733: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) Apr 25 21:53:11.733: INFO: Container kube-bench ready: false, restart count 0 Apr 25 21:53:11.733: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) Apr 25 21:53:11.733: INFO: Container kube-proxy ready: true, restart count 0 Apr 25 21:53:11.733: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) Apr 25 21:53:11.733: INFO: Container kube-hunter ready: false, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: 
Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-845d32b1-f94e-4890-8619-73ebd763b50d 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-845d32b1-f94e-4890-8619-73ebd763b50d off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-845d32b1-f94e-4890-8619-73ebd763b50d [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:58:19.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8247" for this suite. 
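[Editor's note] The hostPort-conflict scenario logged above can be reproduced with a pair of pod manifests along these lines. This is a sketch reconstructed from the log: the node label key/value (`kubernetes.io/e2e-845d32b1-...: "95"`), host port 54322, and the two hostIPs come from the log; the pod names echo the logged pod4/pod5, while the image and containerPort are illustrative assumptions.

```yaml
# pod4: binds hostPort 54322 on 0.0.0.0 (all interfaces); schedules normally.
apiVersion: v1
kind: Pod
metadata:
  name: pod4
spec:
  nodeSelector:
    kubernetes.io/e2e-845d32b1-f94e-4890-8619-73ebd763b50d: "95"  # label applied by the test
  containers:
  - name: agnhost                                   # image/port are illustrative
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    ports:
    - containerPort: 80
      hostPort: 54322
      hostIP: 0.0.0.0
      protocol: TCP
---
# pod5: same hostPort and protocol but hostIP 127.0.0.1. The scheduler treats
# 0.0.0.0 as conflicting with every hostIP on the node, so pod5 stays Pending.
apiVersion: v1
kind: Pod
metadata:
  name: pod5
spec:
  nodeSelector:
    kubernetes.io/e2e-845d32b1-f94e-4890-8619-73ebd763b50d: "95"
  containers:
  - name: agnhost
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    ports:
    - containerPort: 80
      hostPort: 54322
      hostIP: 127.0.0.1
      protocol: TCP
```

The conflict is evaluated per node, which is why the test pins both pods to the same node via the random label before expecting pod5 to remain unscheduled.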
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:308.468 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":190,"skipped":3248,"failed":0} SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:58:20.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1585 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 25 21:58:20.048: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-6997' Apr 25 21:58:20.147: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 25 21:58:20.147: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created Apr 25 21:58:20.152: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 Apr 25 21:58:20.170: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Apr 25 21:58:20.189: INFO: scanned /root for discovery docs: Apr 25 21:58:20.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-6997' Apr 25 21:58:36.073: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Apr 25 21:58:36.073: INFO: stdout: "Created e2e-test-httpd-rc-b217dbc32cbd0ae73f31d0ef8ae61a0c\nScaling up e2e-test-httpd-rc-b217dbc32cbd0ae73f31d0ef8ae61a0c from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-b217dbc32cbd0ae73f31d0ef8ae61a0c up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-b217dbc32cbd0ae73f31d0ef8ae61a0c to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" Apr 25 21:58:36.073: INFO: stdout: "Created e2e-test-httpd-rc-b217dbc32cbd0ae73f31d0ef8ae61a0c\nScaling up e2e-test-httpd-rc-b217dbc32cbd0ae73f31d0ef8ae61a0c from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-b217dbc32cbd0ae73f31d0ef8ae61a0c up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-b217dbc32cbd0ae73f31d0ef8ae61a0c to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. Apr 25 21:58:36.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-6997' Apr 25 21:58:36.171: INFO: stderr: "" Apr 25 21:58:36.171: INFO: stdout: "e2e-test-httpd-rc-b217dbc32cbd0ae73f31d0ef8ae61a0c-8mflq " Apr 25 21:58:36.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-b217dbc32cbd0ae73f31d0ef8ae61a0c-8mflq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6997' Apr 25 21:58:36.279: INFO: stderr: "" Apr 25 21:58:36.279: INFO: stdout: "true" Apr 25 21:58:36.279: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-b217dbc32cbd0ae73f31d0ef8ae61a0c-8mflq -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6997' Apr 25 21:58:36.390: INFO: stderr: "" Apr 25 21:58:36.390: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" Apr 25 21:58:36.390: INFO: e2e-test-httpd-rc-b217dbc32cbd0ae73f31d0ef8ae61a0c-8mflq is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1591 Apr 25 21:58:36.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-6997' Apr 25 21:58:36.526: INFO: stderr: "" Apr 25 21:58:36.526: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:58:36.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6997" for this suite. 
• [SLOW TEST:16.555 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1580 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":278,"completed":191,"skipped":3258,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:58:36.559: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 25 21:58:36.614: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a423fd69-149a-43ad-b70d-2e41083e8a84" in namespace "downward-api-2998" to be "success or failure" Apr 25 21:58:36.617: INFO: Pod "downwardapi-volume-a423fd69-149a-43ad-b70d-2e41083e8a84": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.363763ms Apr 25 21:58:38.622: INFO: Pod "downwardapi-volume-a423fd69-149a-43ad-b70d-2e41083e8a84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008026815s Apr 25 21:58:40.626: INFO: Pod "downwardapi-volume-a423fd69-149a-43ad-b70d-2e41083e8a84": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012498374s STEP: Saw pod success Apr 25 21:58:40.626: INFO: Pod "downwardapi-volume-a423fd69-149a-43ad-b70d-2e41083e8a84" satisfied condition "success or failure" Apr 25 21:58:40.630: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-a423fd69-149a-43ad-b70d-2e41083e8a84 container client-container: STEP: delete the pod Apr 25 21:58:40.682: INFO: Waiting for pod downwardapi-volume-a423fd69-149a-43ad-b70d-2e41083e8a84 to disappear Apr 25 21:58:40.698: INFO: Pod downwardapi-volume-a423fd69-149a-43ad-b70d-2e41083e8a84 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:58:40.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2998" for this suite. 
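[Editor's note] The downward API volume test above mounts the container's own memory request as a file. A minimal sketch of such a pod follows; the container name `client-container` is taken from the log, while the image, command, mount path, and memory value are illustrative assumptions.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container          # name matches the logged container
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8   # illustrative
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: "32Mi"              # illustrative value
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory   # exposed in bytes by default (divisor "1")
```

The test pattern in the log ("success or failure") runs the pod to completion, reads the container log, and asserts the printed value equals the declared request.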
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":192,"skipped":3274,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:58:40.707: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-1872 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-1872 I0425 21:58:40.896085 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-1872, replica count: 2 I0425 21:58:43.946537 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0425 21:58:46.946769 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 25 21:58:46.946: INFO: Creating new exec pod Apr 25 21:58:51.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=services-1872 execpodrh2d8 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Apr 25 21:58:52.209: INFO: stderr: "I0425 21:58:52.105358 2672 log.go:172] (0xc0000f4a50) (0xc0007e6000) Create stream\nI0425 21:58:52.105415 2672 log.go:172] (0xc0000f4a50) (0xc0007e6000) Stream added, broadcasting: 1\nI0425 21:58:52.107710 2672 log.go:172] (0xc0000f4a50) Reply frame received for 1\nI0425 21:58:52.107783 2672 log.go:172] (0xc0000f4a50) (0xc0009e8000) Create stream\nI0425 21:58:52.107808 2672 log.go:172] (0xc0000f4a50) (0xc0009e8000) Stream added, broadcasting: 3\nI0425 21:58:52.108856 2672 log.go:172] (0xc0000f4a50) Reply frame received for 3\nI0425 21:58:52.108886 2672 log.go:172] (0xc0000f4a50) (0xc0005c7ae0) Create stream\nI0425 21:58:52.108899 2672 log.go:172] (0xc0000f4a50) (0xc0005c7ae0) Stream added, broadcasting: 5\nI0425 21:58:52.110041 2672 log.go:172] (0xc0000f4a50) Reply frame received for 5\nI0425 21:58:52.202044 2672 log.go:172] (0xc0000f4a50) Data frame received for 5\nI0425 21:58:52.202079 2672 log.go:172] (0xc0005c7ae0) (5) Data frame handling\nI0425 21:58:52.202097 2672 log.go:172] (0xc0005c7ae0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0425 21:58:52.202315 2672 log.go:172] (0xc0000f4a50) Data frame received for 5\nI0425 21:58:52.202343 2672 log.go:172] (0xc0005c7ae0) (5) Data frame handling\nI0425 21:58:52.202525 2672 log.go:172] (0xc0000f4a50) Data frame received for 3\nI0425 21:58:52.202542 2672 log.go:172] (0xc0009e8000) (3) Data frame handling\nI0425 21:58:52.204593 2672 log.go:172] (0xc0000f4a50) Data frame received for 1\nI0425 21:58:52.204618 2672 log.go:172] (0xc0007e6000) (1) Data frame handling\nI0425 21:58:52.204631 2672 log.go:172] (0xc0007e6000) (1) Data frame sent\nI0425 21:58:52.204641 2672 log.go:172] (0xc0000f4a50) (0xc0007e6000) Stream removed, broadcasting: 1\nI0425 21:58:52.204654 2672 log.go:172] (0xc0000f4a50) Go away 
received\nI0425 21:58:52.205056 2672 log.go:172] (0xc0000f4a50) (0xc0007e6000) Stream removed, broadcasting: 1\nI0425 21:58:52.205074 2672 log.go:172] (0xc0000f4a50) (0xc0009e8000) Stream removed, broadcasting: 3\nI0425 21:58:52.205084 2672 log.go:172] (0xc0000f4a50) (0xc0005c7ae0) Stream removed, broadcasting: 5\n" Apr 25 21:58:52.210: INFO: stdout: "" Apr 25 21:58:52.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1872 execpodrh2d8 -- /bin/sh -x -c nc -zv -t -w 2 10.108.25.91 80' Apr 25 21:58:52.436: INFO: stderr: "I0425 21:58:52.349316 2692 log.go:172] (0xc000a9b6b0) (0xc00097e960) Create stream\nI0425 21:58:52.349369 2692 log.go:172] (0xc000a9b6b0) (0xc00097e960) Stream added, broadcasting: 1\nI0425 21:58:52.354510 2692 log.go:172] (0xc000a9b6b0) Reply frame received for 1\nI0425 21:58:52.354568 2692 log.go:172] (0xc000a9b6b0) (0xc0007046e0) Create stream\nI0425 21:58:52.354586 2692 log.go:172] (0xc000a9b6b0) (0xc0007046e0) Stream added, broadcasting: 3\nI0425 21:58:52.355487 2692 log.go:172] (0xc000a9b6b0) Reply frame received for 3\nI0425 21:58:52.355532 2692 log.go:172] (0xc000a9b6b0) (0xc00097e000) Create stream\nI0425 21:58:52.355545 2692 log.go:172] (0xc000a9b6b0) (0xc00097e000) Stream added, broadcasting: 5\nI0425 21:58:52.356449 2692 log.go:172] (0xc000a9b6b0) Reply frame received for 5\nI0425 21:58:52.430699 2692 log.go:172] (0xc000a9b6b0) Data frame received for 3\nI0425 21:58:52.430747 2692 log.go:172] (0xc0007046e0) (3) Data frame handling\nI0425 21:58:52.430802 2692 log.go:172] (0xc000a9b6b0) Data frame received for 5\nI0425 21:58:52.430843 2692 log.go:172] (0xc00097e000) (5) Data frame handling\nI0425 21:58:52.430855 2692 log.go:172] (0xc00097e000) (5) Data frame sent\nI0425 21:58:52.430862 2692 log.go:172] (0xc000a9b6b0) Data frame received for 5\nI0425 21:58:52.430867 2692 log.go:172] (0xc00097e000) (5) Data frame handling\n+ nc -zv -t -w 2 10.108.25.91 80\nConnection to 10.108.25.91 80 port 
[tcp/http] succeeded!\nI0425 21:58:52.432417 2692 log.go:172] (0xc000a9b6b0) Data frame received for 1\nI0425 21:58:52.432434 2692 log.go:172] (0xc00097e960) (1) Data frame handling\nI0425 21:58:52.432444 2692 log.go:172] (0xc00097e960) (1) Data frame sent\nI0425 21:58:52.432456 2692 log.go:172] (0xc000a9b6b0) (0xc00097e960) Stream removed, broadcasting: 1\nI0425 21:58:52.432471 2692 log.go:172] (0xc000a9b6b0) Go away received\nI0425 21:58:52.432954 2692 log.go:172] (0xc000a9b6b0) (0xc00097e960) Stream removed, broadcasting: 1\nI0425 21:58:52.432990 2692 log.go:172] (0xc000a9b6b0) (0xc0007046e0) Stream removed, broadcasting: 3\nI0425 21:58:52.433012 2692 log.go:172] (0xc000a9b6b0) (0xc00097e000) Stream removed, broadcasting: 5\n" Apr 25 21:58:52.436: INFO: stdout: "" Apr 25 21:58:52.437: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:58:52.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1872" for this suite. 
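[Editor's note] The Services test above flips a service from `type: ExternalName` to `type: ClusterIP` and verifies connectivity on port 80 (the `nc -zv -t -w 2 externalname-service 80` probes in the log). A sketch of the before/after objects, assuming an illustrative `externalName` target and selector (neither appears in the log; the service name and port do):

```yaml
# Before: ExternalName service resolves to a CNAME, no cluster IP, no endpoints.
apiVersion: v1
kind: Service
metadata:
  name: externalname-service
spec:
  type: ExternalName
  externalName: example.com          # illustrative target
---
# After: updated to ClusterIP; kube-proxy now load-balances to the
# replication controller's pods (2 replicas in the log) on port 80.
apiVersion: v1
kind: Service
metadata:
  name: externalname-service
spec:
  type: ClusterIP
  selector:
    name: externalname-service       # illustrative; must match the RC's pod labels
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
```

The two `nc` checks in the log confirm both paths: service-name DNS resolution to the new cluster IP, and a direct connection to the allocated cluster IP (10.108.25.91).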
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:11.782 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":193,"skipped":3284,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:58:52.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
Apr 25 21:58:52.594: INFO: Created pod &Pod{ObjectMeta:{dns-8841 dns-8841 /api/v1/namespaces/dns-8841/pods/dns-8841 384dad17-b7bf-4dd7-bf33-54dc8811abe0 11029612 0 2020-04-25 21:58:52 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-b9vt5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-b9vt5,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-b9vt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname
:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... 
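The pod object dumped above carries `DNSPolicy:None` together with a custom `PodDNSConfig` (`Nameservers:[1.1.1.1]`, `Searches:[resolv.conf.local]`). As a manifest, the spec this test creates can be sketched roughly as follows — the structure mirrors the values visible in the dump, though this is a reconstruction, not the e2e framework's exact object:

```yaml
# Sketch of the pod under test: dnsPolicy "None" disables cluster DNS
# inheritance, and dnsConfig supplies the nameserver and search suffix
# that the agnhost probes then verify inside the pod.
apiVersion: v1
kind: Pod
metadata:
  name: dns-8841
  namespace: dns-8841
spec:
  containers:
  - name: agnhost
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    args: ["pause"]
  dnsPolicy: "None"
  dnsConfig:
    nameservers:
    - 1.1.1.1
    searches:
    - resolv.conf.local
```

With `dnsPolicy: "None"`, the kubelet writes only the `dnsConfig` entries into the pod's `/etc/resolv.conf`, which is why the subsequent `/agnhost dns-suffix` and `/agnhost dns-server-list` execs can assert on exactly these values.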
Apr 25 21:58:56.610: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-8841 PodName:dns-8841 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 25 21:58:56.610: INFO: >>> kubeConfig: /root/.kube/config I0425 21:58:56.651218 6 log.go:172] (0xc002262790) (0xc0020f7f40) Create stream I0425 21:58:56.651257 6 log.go:172] (0xc002262790) (0xc0020f7f40) Stream added, broadcasting: 1 I0425 21:58:56.653408 6 log.go:172] (0xc002262790) Reply frame received for 1 I0425 21:58:56.653480 6 log.go:172] (0xc002262790) (0xc001d9b4a0) Create stream I0425 21:58:56.653519 6 log.go:172] (0xc002262790) (0xc001d9b4a0) Stream added, broadcasting: 3 I0425 21:58:56.654985 6 log.go:172] (0xc002262790) Reply frame received for 3 I0425 21:58:56.655032 6 log.go:172] (0xc002262790) (0xc002168000) Create stream I0425 21:58:56.655045 6 log.go:172] (0xc002262790) (0xc002168000) Stream added, broadcasting: 5 I0425 21:58:56.656225 6 log.go:172] (0xc002262790) Reply frame received for 5 I0425 21:58:56.748615 6 log.go:172] (0xc002262790) Data frame received for 3 I0425 21:58:56.748640 6 log.go:172] (0xc001d9b4a0) (3) Data frame handling I0425 21:58:56.748654 6 log.go:172] (0xc001d9b4a0) (3) Data frame sent I0425 21:58:56.749515 6 log.go:172] (0xc002262790) Data frame received for 5 I0425 21:58:56.749527 6 log.go:172] (0xc002168000) (5) Data frame handling I0425 21:58:56.749570 6 log.go:172] (0xc002262790) Data frame received for 3 I0425 21:58:56.749586 6 log.go:172] (0xc001d9b4a0) (3) Data frame handling I0425 21:58:56.751135 6 log.go:172] (0xc002262790) Data frame received for 1 I0425 21:58:56.751148 6 log.go:172] (0xc0020f7f40) (1) Data frame handling I0425 21:58:56.751159 6 log.go:172] (0xc0020f7f40) (1) Data frame sent I0425 21:58:56.751166 6 log.go:172] (0xc002262790) (0xc0020f7f40) Stream removed, broadcasting: 1 I0425 21:58:56.751232 6 log.go:172] (0xc002262790) (0xc0020f7f40) Stream removed, broadcasting: 1 I0425 21:58:56.751246 6 
log.go:172] (0xc002262790) (0xc001d9b4a0) Stream removed, broadcasting: 3 I0425 21:58:56.751280 6 log.go:172] (0xc002262790) Go away received I0425 21:58:56.751379 6 log.go:172] (0xc002262790) (0xc002168000) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... Apr 25 21:58:56.751: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-8841 PodName:dns-8841 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 25 21:58:56.751: INFO: >>> kubeConfig: /root/.kube/config I0425 21:58:56.780438 6 log.go:172] (0xc0020ffb80) (0xc001e32be0) Create stream I0425 21:58:56.780470 6 log.go:172] (0xc0020ffb80) (0xc001e32be0) Stream added, broadcasting: 1 I0425 21:58:56.782519 6 log.go:172] (0xc0020ffb80) Reply frame received for 1 I0425 21:58:56.782585 6 log.go:172] (0xc0020ffb80) (0xc002723e00) Create stream I0425 21:58:56.782620 6 log.go:172] (0xc0020ffb80) (0xc002723e00) Stream added, broadcasting: 3 I0425 21:58:56.783482 6 log.go:172] (0xc0020ffb80) Reply frame received for 3 I0425 21:58:56.783529 6 log.go:172] (0xc0020ffb80) (0xc00279a640) Create stream I0425 21:58:56.783545 6 log.go:172] (0xc0020ffb80) (0xc00279a640) Stream added, broadcasting: 5 I0425 21:58:56.784417 6 log.go:172] (0xc0020ffb80) Reply frame received for 5 I0425 21:58:56.868139 6 log.go:172] (0xc0020ffb80) Data frame received for 3 I0425 21:58:56.868169 6 log.go:172] (0xc002723e00) (3) Data frame handling I0425 21:58:56.868185 6 log.go:172] (0xc002723e00) (3) Data frame sent I0425 21:58:56.869063 6 log.go:172] (0xc0020ffb80) Data frame received for 3 I0425 21:58:56.869086 6 log.go:172] (0xc002723e00) (3) Data frame handling I0425 21:58:56.869435 6 log.go:172] (0xc0020ffb80) Data frame received for 5 I0425 21:58:56.869453 6 log.go:172] (0xc00279a640) (5) Data frame handling I0425 21:58:56.870956 6 log.go:172] (0xc0020ffb80) Data frame received for 1 I0425 21:58:56.870981 6 log.go:172] (0xc001e32be0) (1) Data 
frame handling I0425 21:58:56.871000 6 log.go:172] (0xc001e32be0) (1) Data frame sent I0425 21:58:56.871017 6 log.go:172] (0xc0020ffb80) (0xc001e32be0) Stream removed, broadcasting: 1 I0425 21:58:56.871032 6 log.go:172] (0xc0020ffb80) Go away received I0425 21:58:56.871186 6 log.go:172] (0xc0020ffb80) (0xc001e32be0) Stream removed, broadcasting: 1 I0425 21:58:56.871216 6 log.go:172] (0xc0020ffb80) (0xc002723e00) Stream removed, broadcasting: 3 I0425 21:58:56.871226 6 log.go:172] (0xc0020ffb80) (0xc00279a640) Stream removed, broadcasting: 5 Apr 25 21:58:56.871: INFO: Deleting pod dns-8841... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:58:56.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8841" for this suite. •{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":194,"skipped":3310,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:58:56.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
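The lifecycle-hook test starting here first runs a handler pod, then creates a pod whose container declares a `postStart` HTTP hook pointed at that handler. A minimal sketch of such a pod follows — the hook path, port, and host are hypothetical placeholders, since the log does not show the actual handler address:

```yaml
# Sketch (not the exact e2e spec) of a pod like pod-with-poststart-http-hook:
# the kubelet issues the httpGet immediately after the container starts,
# and the test passes once the handler pod observes the request.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: pod-with-poststart-http-hook
    image: k8s.gcr.io/pause:3.1        # placeholder image
    lifecycle:
      postStart:
        httpGet:
          path: /echo?msg=poststart    # hypothetical handler endpoint
          port: 8080
          host: 10.244.0.5             # hypothetical handler pod IP
```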
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Apr 25 21:59:05.180: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 25 21:59:05.198: INFO: Pod pod-with-poststart-http-hook still exists Apr 25 21:59:07.198: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 25 21:59:07.211: INFO: Pod pod-with-poststart-http-hook still exists Apr 25 21:59:09.198: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 25 21:59:09.202: INFO: Pod pod-with-poststart-http-hook still exists Apr 25 21:59:11.198: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 25 21:59:11.202: INFO: Pod pod-with-poststart-http-hook still exists Apr 25 21:59:13.198: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 25 21:59:13.203: INFO: Pod pod-with-poststart-http-hook still exists Apr 25 21:59:15.198: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 25 21:59:15.202: INFO: Pod pod-with-poststart-http-hook still exists Apr 25 21:59:17.198: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 25 21:59:17.202: INFO: Pod pod-with-poststart-http-hook still exists Apr 25 21:59:19.198: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 25 21:59:19.202: INFO: Pod pod-with-poststart-http-hook still exists Apr 25 21:59:21.198: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 25 21:59:21.202: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:59:21.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6484" 
for this suite. • [SLOW TEST:24.280 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":195,"skipped":3325,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:59:21.212: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 25 21:59:21.291: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:59:25.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "pods-5554" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":196,"skipped":3344,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:59:25.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 25 21:59:25.531: INFO: Waiting up to 5m0s for pod "downwardapi-volume-83734bfb-3795-48df-8ce6-dfac95f04984" in namespace "projected-5378" to be "success or failure" Apr 25 21:59:25.534: INFO: Pod "downwardapi-volume-83734bfb-3795-48df-8ce6-dfac95f04984": Phase="Pending", Reason="", readiness=false. Elapsed: 3.622516ms Apr 25 21:59:27.538: INFO: Pod "downwardapi-volume-83734bfb-3795-48df-8ce6-dfac95f04984": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007198475s Apr 25 21:59:29.543: INFO: Pod "downwardapi-volume-83734bfb-3795-48df-8ce6-dfac95f04984": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012798386s STEP: Saw pod success Apr 25 21:59:29.543: INFO: Pod "downwardapi-volume-83734bfb-3795-48df-8ce6-dfac95f04984" satisfied condition "success or failure" Apr 25 21:59:29.546: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-83734bfb-3795-48df-8ce6-dfac95f04984 container client-container: STEP: delete the pod Apr 25 21:59:29.632: INFO: Waiting for pod downwardapi-volume-83734bfb-3795-48df-8ce6-dfac95f04984 to disappear Apr 25 21:59:29.642: INFO: Pod downwardapi-volume-83734bfb-3795-48df-8ce6-dfac95f04984 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:59:29.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5378" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":197,"skipped":3357,"failed":0} ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:59:29.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:59:42.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4962" for this suite. • [SLOW TEST:13.189 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":278,"completed":198,"skipped":3357,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:59:42.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Apr 25 21:59:42.885: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 21:59:50.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8615" for this suite. 
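For the init-container test above, the pod spec places containers in `spec.initContainers` (as the "PodSpec: initContainers in spec.initContainers" line notes) on a pod with `restartPolicy: Never`; the init containers must each run to completion, in order, before the app container starts. A rough illustrative shape — images and commands here are assumptions, not the e2e framework's exact spec:

```yaml
# Sketch of a RestartNever pod with init containers: init1 and init2 run
# sequentially to completion before run1 is started; with restartPolicy
# Never, a failing init container fails the pod rather than retrying forever.
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: busybox:1.29     # illustrative image
    command: ["/bin/true"]
  - name: init2
    image: busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: busybox:1.29
    command: ["/bin/true"]
```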
• [SLOW TEST:7.365 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":199,"skipped":3366,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 21:59:50.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-csn7 STEP: Creating a pod to test atomic-volume-subpath Apr 25 21:59:50.316: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-csn7" in namespace "subpath-8796" to be "success or failure" Apr 25 21:59:50.319: INFO: Pod "pod-subpath-test-configmap-csn7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.446067ms Apr 25 21:59:52.367: INFO: Pod "pod-subpath-test-configmap-csn7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050814957s Apr 25 21:59:54.370: INFO: Pod "pod-subpath-test-configmap-csn7": Phase="Running", Reason="", readiness=true. Elapsed: 4.054243897s Apr 25 21:59:56.374: INFO: Pod "pod-subpath-test-configmap-csn7": Phase="Running", Reason="", readiness=true. Elapsed: 6.058404957s Apr 25 21:59:58.378: INFO: Pod "pod-subpath-test-configmap-csn7": Phase="Running", Reason="", readiness=true. Elapsed: 8.062439715s Apr 25 22:00:00.383: INFO: Pod "pod-subpath-test-configmap-csn7": Phase="Running", Reason="", readiness=true. Elapsed: 10.066800619s Apr 25 22:00:02.386: INFO: Pod "pod-subpath-test-configmap-csn7": Phase="Running", Reason="", readiness=true. Elapsed: 12.070610065s Apr 25 22:00:04.391: INFO: Pod "pod-subpath-test-configmap-csn7": Phase="Running", Reason="", readiness=true. Elapsed: 14.074891208s Apr 25 22:00:06.409: INFO: Pod "pod-subpath-test-configmap-csn7": Phase="Running", Reason="", readiness=true. Elapsed: 16.093304002s Apr 25 22:00:08.412: INFO: Pod "pod-subpath-test-configmap-csn7": Phase="Running", Reason="", readiness=true. Elapsed: 18.095958035s Apr 25 22:00:10.433: INFO: Pod "pod-subpath-test-configmap-csn7": Phase="Running", Reason="", readiness=true. Elapsed: 20.117105561s Apr 25 22:00:12.436: INFO: Pod "pod-subpath-test-configmap-csn7": Phase="Running", Reason="", readiness=true. Elapsed: 22.120482887s Apr 25 22:00:14.450: INFO: Pod "pod-subpath-test-configmap-csn7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.134066938s STEP: Saw pod success Apr 25 22:00:14.450: INFO: Pod "pod-subpath-test-configmap-csn7" satisfied condition "success or failure" Apr 25 22:00:14.461: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-configmap-csn7 container test-container-subpath-configmap-csn7: STEP: delete the pod Apr 25 22:00:14.507: INFO: Waiting for pod pod-subpath-test-configmap-csn7 to disappear Apr 25 22:00:14.539: INFO: Pod pod-subpath-test-configmap-csn7 no longer exists STEP: Deleting pod pod-subpath-test-configmap-csn7 Apr 25 22:00:14.539: INFO: Deleting pod "pod-subpath-test-configmap-csn7" in namespace "subpath-8796" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:00:14.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8796" for this suite. • [SLOW TEST:24.343 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":200,"skipped":3379,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:00:14.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:00:31.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8929" for this suite. • [SLOW TEST:17.171 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":278,"completed":201,"skipped":3394,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:00:31.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs Apr 25 22:00:31.824: INFO: Waiting up to 5m0s for pod "pod-b227d83c-c6c9-4600-95aa-bcc1a5033268" in namespace "emptydir-7242" to be "success or failure" Apr 25 22:00:31.864: INFO: Pod "pod-b227d83c-c6c9-4600-95aa-bcc1a5033268": Phase="Pending", Reason="", readiness=false. Elapsed: 39.675626ms Apr 25 22:00:33.868: INFO: Pod "pod-b227d83c-c6c9-4600-95aa-bcc1a5033268": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043984161s Apr 25 22:00:35.878: INFO: Pod "pod-b227d83c-c6c9-4600-95aa-bcc1a5033268": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.053362948s STEP: Saw pod success Apr 25 22:00:35.878: INFO: Pod "pod-b227d83c-c6c9-4600-95aa-bcc1a5033268" satisfied condition "success or failure" Apr 25 22:00:35.880: INFO: Trying to get logs from node jerma-worker2 pod pod-b227d83c-c6c9-4600-95aa-bcc1a5033268 container test-container: STEP: delete the pod Apr 25 22:00:35.895: INFO: Waiting for pod pod-b227d83c-c6c9-4600-95aa-bcc1a5033268 to disappear Apr 25 22:00:35.900: INFO: Pod pod-b227d83c-c6c9-4600-95aa-bcc1a5033268 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:00:35.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7242" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":202,"skipped":3407,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:00:35.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-2d72a18c-b9f9-415a-84b0-de30d86a1040 STEP: Creating a pod to test consume configMaps Apr 25 22:00:35.975: INFO: Waiting up 
to 5m0s for pod "pod-projected-configmaps-10357cf3-75f9-40ea-8dee-1464d308765d" in namespace "projected-2431" to be "success or failure" Apr 25 22:00:36.020: INFO: Pod "pod-projected-configmaps-10357cf3-75f9-40ea-8dee-1464d308765d": Phase="Pending", Reason="", readiness=false. Elapsed: 44.792198ms Apr 25 22:00:38.024: INFO: Pod "pod-projected-configmaps-10357cf3-75f9-40ea-8dee-1464d308765d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049297623s Apr 25 22:00:40.028: INFO: Pod "pod-projected-configmaps-10357cf3-75f9-40ea-8dee-1464d308765d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053091244s STEP: Saw pod success Apr 25 22:00:40.028: INFO: Pod "pod-projected-configmaps-10357cf3-75f9-40ea-8dee-1464d308765d" satisfied condition "success or failure" Apr 25 22:00:40.031: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-10357cf3-75f9-40ea-8dee-1464d308765d container projected-configmap-volume-test: STEP: delete the pod Apr 25 22:00:40.072: INFO: Waiting for pod pod-projected-configmaps-10357cf3-75f9-40ea-8dee-1464d308765d to disappear Apr 25 22:00:40.087: INFO: Pod pod-projected-configmaps-10357cf3-75f9-40ea-8dee-1464d308765d no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:00:40.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2431" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":203,"skipped":3430,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:00:40.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-b98333aa-ba8c-45de-b267-dfedefe63c08 STEP: Creating a pod to test consume configMaps Apr 25 22:00:40.168: INFO: Waiting up to 5m0s for pod "pod-configmaps-df7bb5b9-6c4c-4aec-8455-551da420f3d9" in namespace "configmap-5921" to be "success or failure" Apr 25 22:00:40.170: INFO: Pod "pod-configmaps-df7bb5b9-6c4c-4aec-8455-551da420f3d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.628504ms Apr 25 22:00:42.174: INFO: Pod "pod-configmaps-df7bb5b9-6c4c-4aec-8455-551da420f3d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006642229s Apr 25 22:00:44.178: INFO: Pod "pod-configmaps-df7bb5b9-6c4c-4aec-8455-551da420f3d9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.010443042s STEP: Saw pod success Apr 25 22:00:44.178: INFO: Pod "pod-configmaps-df7bb5b9-6c4c-4aec-8455-551da420f3d9" satisfied condition "success or failure" Apr 25 22:00:44.181: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-df7bb5b9-6c4c-4aec-8455-551da420f3d9 container configmap-volume-test: STEP: delete the pod Apr 25 22:00:44.200: INFO: Waiting for pod pod-configmaps-df7bb5b9-6c4c-4aec-8455-551da420f3d9 to disappear Apr 25 22:00:44.204: INFO: Pod pod-configmaps-df7bb5b9-6c4c-4aec-8455-551da420f3d9 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:00:44.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5921" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":204,"skipped":3442,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:00:44.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Apr 25 22:00:48.890: INFO: Successfully updated pod "labelsupdate79599b02-09ad-48f2-a9df-58f9bc0aab31" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:00:50.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6295" for this suite. • [SLOW TEST:6.740 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":205,"skipped":3471,"failed":0} [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:00:50.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override all Apr 25 22:00:51.033: INFO: Waiting up to 5m0s for pod 
"client-containers-4f8f6155-0425-452e-96ac-988b099c6702" in namespace "containers-9113" to be "success or failure" Apr 25 22:00:51.075: INFO: Pod "client-containers-4f8f6155-0425-452e-96ac-988b099c6702": Phase="Pending", Reason="", readiness=false. Elapsed: 41.566014ms Apr 25 22:00:53.092: INFO: Pod "client-containers-4f8f6155-0425-452e-96ac-988b099c6702": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059150169s Apr 25 22:00:55.096: INFO: Pod "client-containers-4f8f6155-0425-452e-96ac-988b099c6702": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063112899s STEP: Saw pod success Apr 25 22:00:55.096: INFO: Pod "client-containers-4f8f6155-0425-452e-96ac-988b099c6702" satisfied condition "success or failure" Apr 25 22:00:55.099: INFO: Trying to get logs from node jerma-worker2 pod client-containers-4f8f6155-0425-452e-96ac-988b099c6702 container test-container: STEP: delete the pod Apr 25 22:00:55.118: INFO: Waiting for pod client-containers-4f8f6155-0425-452e-96ac-988b099c6702 to disappear Apr 25 22:00:55.122: INFO: Pod client-containers-4f8f6155-0425-452e-96ac-988b099c6702 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:00:55.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9113" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":206,"skipped":3471,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:00:55.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 25 22:00:55.632: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 25 22:00:57.643: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723448855, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723448855, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, 
ext:63723448855, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723448855, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 25 22:01:00.685: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:01:12.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8664" for this suite. STEP: Destroying namespace "webhook-8664-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:17.882 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":207,"skipped":3484,"failed":0} [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:01:13.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-5e2a3dd1-c781-4cda-b2a1-13a1dece80b2 STEP: Creating a pod to test consume secrets Apr 25 22:01:13.079: INFO: Waiting up to 5m0s for pod "pod-secrets-446b1e93-0507-41af-af87-df84f63de1de" in namespace "secrets-3486" to be "success or failure" Apr 25 22:01:13.082: INFO: Pod "pod-secrets-446b1e93-0507-41af-af87-df84f63de1de": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.182237ms Apr 25 22:01:15.086: INFO: Pod "pod-secrets-446b1e93-0507-41af-af87-df84f63de1de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007620135s Apr 25 22:01:17.091: INFO: Pod "pod-secrets-446b1e93-0507-41af-af87-df84f63de1de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01206303s STEP: Saw pod success Apr 25 22:01:17.091: INFO: Pod "pod-secrets-446b1e93-0507-41af-af87-df84f63de1de" satisfied condition "success or failure" Apr 25 22:01:17.094: INFO: Trying to get logs from node jerma-worker pod pod-secrets-446b1e93-0507-41af-af87-df84f63de1de container secret-volume-test: STEP: delete the pod Apr 25 22:01:17.135: INFO: Waiting for pod pod-secrets-446b1e93-0507-41af-af87-df84f63de1de to disappear Apr 25 22:01:17.142: INFO: Pod pod-secrets-446b1e93-0507-41af-af87-df84f63de1de no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:01:17.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3486" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":208,"skipped":3484,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:01:17.149: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 25 22:01:17.221: INFO: Waiting up to 5m0s for pod "downwardapi-volume-25e22d22-97db-4221-8e4d-383a7c700d9e" in namespace "downward-api-7724" to be "success or failure" Apr 25 22:01:17.225: INFO: Pod "downwardapi-volume-25e22d22-97db-4221-8e4d-383a7c700d9e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.131731ms Apr 25 22:01:19.243: INFO: Pod "downwardapi-volume-25e22d22-97db-4221-8e4d-383a7c700d9e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021520488s Apr 25 22:01:21.247: INFO: Pod "downwardapi-volume-25e22d22-97db-4221-8e4d-383a7c700d9e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.025930966s STEP: Saw pod success Apr 25 22:01:21.247: INFO: Pod "downwardapi-volume-25e22d22-97db-4221-8e4d-383a7c700d9e" satisfied condition "success or failure" Apr 25 22:01:21.250: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-25e22d22-97db-4221-8e4d-383a7c700d9e container client-container: STEP: delete the pod Apr 25 22:01:21.267: INFO: Waiting for pod downwardapi-volume-25e22d22-97db-4221-8e4d-383a7c700d9e to disappear Apr 25 22:01:21.271: INFO: Pod downwardapi-volume-25e22d22-97db-4221-8e4d-383a7c700d9e no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:01:21.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7724" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":209,"skipped":3500,"failed":0} SSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:01:21.298: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-956c7f89-70d6-455b-9047-5516e7b40971 in namespace container-probe-3560 Apr 25 22:01:25.382: INFO: Started pod busybox-956c7f89-70d6-455b-9047-5516e7b40971 in namespace container-probe-3560 STEP: checking the pod's current state and verifying that restartCount is present Apr 25 22:01:25.384: INFO: Initial restart count of pod busybox-956c7f89-70d6-455b-9047-5516e7b40971 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:05:25.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3560" for this suite. • [SLOW TEST:244.673 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":210,"skipped":3506,"failed":0} SSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:05:25.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned 
in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: executing a command with run --rm and attach with stdin Apr 25 22:05:26.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3528 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Apr 25 22:05:32.013: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0425 22:05:31.929610 2712 log.go:172] (0xc0005b2630) (0xc000677a40) Create stream\nI0425 22:05:31.929637 2712 log.go:172] (0xc0005b2630) (0xc000677a40) Stream added, broadcasting: 1\nI0425 22:05:31.931938 2712 log.go:172] (0xc0005b2630) Reply frame received for 1\nI0425 22:05:31.931977 2712 log.go:172] (0xc0005b2630) (0xc000315220) Create stream\nI0425 22:05:31.931990 2712 log.go:172] (0xc0005b2630) (0xc000315220) Stream added, broadcasting: 3\nI0425 22:05:31.932850 2712 log.go:172] (0xc0005b2630) Reply frame received for 3\nI0425 22:05:31.932872 2712 log.go:172] (0xc0005b2630) (0xc000677ae0) Create stream\nI0425 22:05:31.932880 2712 log.go:172] (0xc0005b2630) (0xc000677ae0) Stream added, broadcasting: 5\nI0425 22:05:31.933954 2712 log.go:172] (0xc0005b2630) Reply frame received for 5\nI0425 22:05:31.934015 2712 log.go:172] (0xc0005b2630) (0xc0008ea0a0) Create stream\nI0425 22:05:31.934041 2712 log.go:172] (0xc0005b2630) (0xc0008ea0a0) Stream added, broadcasting: 7\nI0425 22:05:31.935006 2712 log.go:172] (0xc0005b2630) Reply frame received for 7\nI0425 
22:05:31.935124 2712 log.go:172] (0xc000315220) (3) Writing data frame\nI0425 22:05:31.935240 2712 log.go:172] (0xc000315220) (3) Writing data frame\nI0425 22:05:31.936093 2712 log.go:172] (0xc0005b2630) Data frame received for 5\nI0425 22:05:31.936107 2712 log.go:172] (0xc000677ae0) (5) Data frame handling\nI0425 22:05:31.936124 2712 log.go:172] (0xc000677ae0) (5) Data frame sent\nI0425 22:05:31.936949 2712 log.go:172] (0xc0005b2630) Data frame received for 5\nI0425 22:05:31.936983 2712 log.go:172] (0xc000677ae0) (5) Data frame handling\nI0425 22:05:31.937010 2712 log.go:172] (0xc000677ae0) (5) Data frame sent\nI0425 22:05:31.979202 2712 log.go:172] (0xc0005b2630) Data frame received for 5\nI0425 22:05:31.979228 2712 log.go:172] (0xc000677ae0) (5) Data frame handling\nI0425 22:05:31.979254 2712 log.go:172] (0xc0005b2630) Data frame received for 7\nI0425 22:05:31.979281 2712 log.go:172] (0xc0008ea0a0) (7) Data frame handling\nI0425 22:05:31.979809 2712 log.go:172] (0xc0005b2630) Data frame received for 1\nI0425 22:05:31.979828 2712 log.go:172] (0xc000677a40) (1) Data frame handling\nI0425 22:05:31.979837 2712 log.go:172] (0xc000677a40) (1) Data frame sent\nI0425 22:05:31.979846 2712 log.go:172] (0xc0005b2630) (0xc000677a40) Stream removed, broadcasting: 1\nI0425 22:05:31.979893 2712 log.go:172] (0xc0005b2630) (0xc000315220) Stream removed, broadcasting: 3\nI0425 22:05:31.980101 2712 log.go:172] (0xc0005b2630) Go away received\nI0425 22:05:31.980146 2712 log.go:172] (0xc0005b2630) (0xc000677a40) Stream removed, broadcasting: 1\nI0425 22:05:31.980182 2712 log.go:172] (0xc0005b2630) (0xc000315220) Stream removed, broadcasting: 3\nI0425 22:05:31.980207 2712 log.go:172] (0xc0005b2630) (0xc000677ae0) Stream removed, broadcasting: 5\nI0425 22:05:31.980228 2712 log.go:172] (0xc0005b2630) (0xc0008ea0a0) Stream removed, broadcasting: 7\n" Apr 25 22:05:32.014: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job 
e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:05:34.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3528" for this suite. • [SLOW TEST:8.056 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1837 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":211,"skipped":3512,"failed":0} SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:05:34.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-41a1a398-a18e-4acf-b4a0-98baedf3a71e STEP: Creating a pod to test consume secrets Apr 25 22:05:34.111: INFO: Waiting up to 5m0s for pod 
"pod-secrets-761b0d24-ccf9-4f2d-bc23-8da20ebe34b3" in namespace "secrets-2448" to be "success or failure" Apr 25 22:05:34.115: INFO: Pod "pod-secrets-761b0d24-ccf9-4f2d-bc23-8da20ebe34b3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.699633ms Apr 25 22:05:36.119: INFO: Pod "pod-secrets-761b0d24-ccf9-4f2d-bc23-8da20ebe34b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008039578s Apr 25 22:05:38.123: INFO: Pod "pod-secrets-761b0d24-ccf9-4f2d-bc23-8da20ebe34b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012155477s STEP: Saw pod success Apr 25 22:05:38.123: INFO: Pod "pod-secrets-761b0d24-ccf9-4f2d-bc23-8da20ebe34b3" satisfied condition "success or failure" Apr 25 22:05:38.127: INFO: Trying to get logs from node jerma-worker pod pod-secrets-761b0d24-ccf9-4f2d-bc23-8da20ebe34b3 container secret-volume-test: STEP: delete the pod Apr 25 22:05:38.158: INFO: Waiting for pod pod-secrets-761b0d24-ccf9-4f2d-bc23-8da20ebe34b3 to disappear Apr 25 22:05:38.186: INFO: Pod pod-secrets-761b0d24-ccf9-4f2d-bc23-8da20ebe34b3 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:05:38.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2448" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":212,"skipped":3514,"failed":0} SSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:05:38.194: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1591.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1591.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1591.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1591.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1591.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1591.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1591.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1591.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1591.svc.cluster.local SRV)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@_http._tcp.test-service-2.dns-1591.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1591.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1591.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1591.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 185.26.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.26.185_udp@PTR;check="$$(dig +tcp +noall +answer +search 185.26.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.26.185_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1591.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1591.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1591.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1591.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1591.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1591.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1591.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1591.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1591.svc.cluster.local SRV)" && test -n "$$check" && echo OK > 
/results/jessie_udp@_http._tcp.test-service-2.dns-1591.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1591.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1591.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1591.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 185.26.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.26.185_udp@PTR;check="$$(dig +tcp +noall +answer +search 185.26.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.26.185_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 25 22:05:44.399: INFO: Unable to read wheezy_udp@dns-test-service.dns-1591.svc.cluster.local from pod dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e: the server could not find the requested resource (get pods dns-test-4d7d0102-5103-415a-8c55-9b899bed601e) Apr 25 22:05:44.402: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1591.svc.cluster.local from pod dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e: the server could not find the requested resource (get pods dns-test-4d7d0102-5103-415a-8c55-9b899bed601e) Apr 25 22:05:44.406: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1591.svc.cluster.local from pod dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e: the server could not find the requested resource (get pods dns-test-4d7d0102-5103-415a-8c55-9b899bed601e) Apr 25 22:05:44.409: INFO: Unable to read 
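The probe script above builds two kinds of DNS names from raw IPs: a PTR query name by reversing the service IP's octets under `in-addr.arpa` (the `185.26.110.10.in-addr.arpa.` target for service IP `10.110.26.185`), and a pod A-record name by replacing the dots in the pod IP with dashes under `<namespace>.pod.cluster.local` (the `hostname -i | awk` pipeline). A minimal sketch of those two transformations, outside the e2e framework — the function names and the example pod IP are illustrative, not taken from the log:

```python
def ptr_name(ip: str) -> str:
    """Reverse an IPv4 address's octets to form an in-addr.arpa PTR query name."""
    return ".".join(reversed(ip.split("."))) + ".in-addr.arpa."

def pod_a_record(ip: str, namespace: str) -> str:
    """Dashed-IP pod A record, equivalent to the awk pipeline in the probe script."""
    return ip.replace(".", "-") + "." + namespace + ".pod.cluster.local"

# The service IP from the log; the pod IP below is a made-up example.
print(ptr_name("10.110.26.185"))          # the dig PTR target seen above
print(pod_a_record("10.244.1.7", "dns-1591"))
```

This mirrors why the result files are named `10.110.26.185_udp@PTR` and `wheezy_udp@PodARecord`: the probe writes one `OK` marker per name form it can resolve.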
wheezy_tcp@_http._tcp.dns-test-service.dns-1591.svc.cluster.local from pod dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e: the server could not find the requested resource (get pods dns-test-4d7d0102-5103-415a-8c55-9b899bed601e) Apr 25 22:05:44.431: INFO: Unable to read jessie_udp@dns-test-service.dns-1591.svc.cluster.local from pod dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e: the server could not find the requested resource (get pods dns-test-4d7d0102-5103-415a-8c55-9b899bed601e) Apr 25 22:05:44.434: INFO: Unable to read jessie_tcp@dns-test-service.dns-1591.svc.cluster.local from pod dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e: the server could not find the requested resource (get pods dns-test-4d7d0102-5103-415a-8c55-9b899bed601e) Apr 25 22:05:44.438: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1591.svc.cluster.local from pod dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e: the server could not find the requested resource (get pods dns-test-4d7d0102-5103-415a-8c55-9b899bed601e) Apr 25 22:05:44.441: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1591.svc.cluster.local from pod dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e: the server could not find the requested resource (get pods dns-test-4d7d0102-5103-415a-8c55-9b899bed601e) Apr 25 22:05:44.458: INFO: Lookups using dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e failed for: [wheezy_udp@dns-test-service.dns-1591.svc.cluster.local wheezy_tcp@dns-test-service.dns-1591.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1591.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1591.svc.cluster.local jessie_udp@dns-test-service.dns-1591.svc.cluster.local jessie_tcp@dns-test-service.dns-1591.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1591.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1591.svc.cluster.local] Apr 25 22:05:49.462: INFO: Unable to read 
wheezy_udp@dns-test-service.dns-1591.svc.cluster.local from pod dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e: the server could not find the requested resource (get pods dns-test-4d7d0102-5103-415a-8c55-9b899bed601e) Apr 25 22:05:49.466: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1591.svc.cluster.local from pod dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e: the server could not find the requested resource (get pods dns-test-4d7d0102-5103-415a-8c55-9b899bed601e) Apr 25 22:05:49.469: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1591.svc.cluster.local from pod dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e: the server could not find the requested resource (get pods dns-test-4d7d0102-5103-415a-8c55-9b899bed601e) Apr 25 22:05:49.472: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1591.svc.cluster.local from pod dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e: the server could not find the requested resource (get pods dns-test-4d7d0102-5103-415a-8c55-9b899bed601e) Apr 25 22:05:49.493: INFO: Unable to read jessie_udp@dns-test-service.dns-1591.svc.cluster.local from pod dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e: the server could not find the requested resource (get pods dns-test-4d7d0102-5103-415a-8c55-9b899bed601e) Apr 25 22:05:49.495: INFO: Unable to read jessie_tcp@dns-test-service.dns-1591.svc.cluster.local from pod dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e: the server could not find the requested resource (get pods dns-test-4d7d0102-5103-415a-8c55-9b899bed601e) Apr 25 22:05:49.498: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1591.svc.cluster.local from pod dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e: the server could not find the requested resource (get pods dns-test-4d7d0102-5103-415a-8c55-9b899bed601e) Apr 25 22:05:49.501: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1591.svc.cluster.local from pod 
dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e: the server could not find the requested resource (get pods dns-test-4d7d0102-5103-415a-8c55-9b899bed601e) Apr 25 22:05:49.518: INFO: Lookups using dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e failed for: [wheezy_udp@dns-test-service.dns-1591.svc.cluster.local wheezy_tcp@dns-test-service.dns-1591.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1591.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1591.svc.cluster.local jessie_udp@dns-test-service.dns-1591.svc.cluster.local jessie_tcp@dns-test-service.dns-1591.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1591.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1591.svc.cluster.local] Apr 25 22:05:54.463: INFO: Unable to read wheezy_udp@dns-test-service.dns-1591.svc.cluster.local from pod dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e: the server could not find the requested resource (get pods dns-test-4d7d0102-5103-415a-8c55-9b899bed601e) Apr 25 22:05:54.467: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1591.svc.cluster.local from pod dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e: the server could not find the requested resource (get pods dns-test-4d7d0102-5103-415a-8c55-9b899bed601e) Apr 25 22:05:54.470: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1591.svc.cluster.local from pod dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e: the server could not find the requested resource (get pods dns-test-4d7d0102-5103-415a-8c55-9b899bed601e) Apr 25 22:05:54.473: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1591.svc.cluster.local from pod dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e: the server could not find the requested resource (get pods dns-test-4d7d0102-5103-415a-8c55-9b899bed601e) Apr 25 22:05:54.493: INFO: Unable to read jessie_udp@dns-test-service.dns-1591.svc.cluster.local from pod 
dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e: the server could not find the requested resource (get pods dns-test-4d7d0102-5103-415a-8c55-9b899bed601e) Apr 25 22:05:54.497: INFO: Unable to read jessie_tcp@dns-test-service.dns-1591.svc.cluster.local from pod dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e: the server could not find the requested resource (get pods dns-test-4d7d0102-5103-415a-8c55-9b899bed601e) Apr 25 22:05:54.500: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1591.svc.cluster.local from pod dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e: the server could not find the requested resource (get pods dns-test-4d7d0102-5103-415a-8c55-9b899bed601e) Apr 25 22:05:54.503: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1591.svc.cluster.local from pod dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e: the server could not find the requested resource (get pods dns-test-4d7d0102-5103-415a-8c55-9b899bed601e) Apr 25 22:05:54.521: INFO: Lookups using dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e failed for: [wheezy_udp@dns-test-service.dns-1591.svc.cluster.local wheezy_tcp@dns-test-service.dns-1591.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1591.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1591.svc.cluster.local jessie_udp@dns-test-service.dns-1591.svc.cluster.local jessie_tcp@dns-test-service.dns-1591.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1591.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1591.svc.cluster.local] Apr 25 22:05:59.505: INFO: Unable to read wheezy_udp@dns-test-service.dns-1591.svc.cluster.local from pod dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e: the server could not find the requested resource (get pods dns-test-4d7d0102-5103-415a-8c55-9b899bed601e) Apr 25 22:05:59.518: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1591.svc.cluster.local from pod 
dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e: the server could not find the requested resource (get pods dns-test-4d7d0102-5103-415a-8c55-9b899bed601e) Apr 25 22:05:59.521: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1591.svc.cluster.local from pod dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e: the server could not find the requested resource (get pods dns-test-4d7d0102-5103-415a-8c55-9b899bed601e) Apr 25 22:05:59.524: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1591.svc.cluster.local from pod dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e: the server could not find the requested resource (get pods dns-test-4d7d0102-5103-415a-8c55-9b899bed601e) Apr 25 22:05:59.544: INFO: Unable to read jessie_udp@dns-test-service.dns-1591.svc.cluster.local from pod dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e: the server could not find the requested resource (get pods dns-test-4d7d0102-5103-415a-8c55-9b899bed601e) Apr 25 22:05:59.546: INFO: Unable to read jessie_tcp@dns-test-service.dns-1591.svc.cluster.local from pod dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e: the server could not find the requested resource (get pods dns-test-4d7d0102-5103-415a-8c55-9b899bed601e) Apr 25 22:05:59.549: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1591.svc.cluster.local from pod dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e: the server could not find the requested resource (get pods dns-test-4d7d0102-5103-415a-8c55-9b899bed601e) Apr 25 22:05:59.552: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1591.svc.cluster.local from pod dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e: the server could not find the requested resource (get pods dns-test-4d7d0102-5103-415a-8c55-9b899bed601e) Apr 25 22:05:59.570: INFO: Lookups using dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e failed for: [wheezy_udp@dns-test-service.dns-1591.svc.cluster.local 
wheezy_tcp@dns-test-service.dns-1591.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1591.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1591.svc.cluster.local jessie_udp@dns-test-service.dns-1591.svc.cluster.local jessie_tcp@dns-test-service.dns-1591.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1591.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1591.svc.cluster.local] Apr 25 22:06:04.463: INFO: Unable to read wheezy_udp@dns-test-service.dns-1591.svc.cluster.local from pod dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e: the server could not find the requested resource (get pods dns-test-4d7d0102-5103-415a-8c55-9b899bed601e) Apr 25 22:06:04.466: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1591.svc.cluster.local from pod dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e: the server could not find the requested resource (get pods dns-test-4d7d0102-5103-415a-8c55-9b899bed601e) Apr 25 22:06:04.470: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1591.svc.cluster.local from pod dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e: the server could not find the requested resource (get pods dns-test-4d7d0102-5103-415a-8c55-9b899bed601e) Apr 25 22:06:04.473: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1591.svc.cluster.local from pod dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e: the server could not find the requested resource (get pods dns-test-4d7d0102-5103-415a-8c55-9b899bed601e) Apr 25 22:06:04.496: INFO: Unable to read jessie_udp@dns-test-service.dns-1591.svc.cluster.local from pod dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e: the server could not find the requested resource (get pods dns-test-4d7d0102-5103-415a-8c55-9b899bed601e) Apr 25 22:06:04.499: INFO: Unable to read jessie_tcp@dns-test-service.dns-1591.svc.cluster.local from pod dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e: the server could not find the requested 
resource (get pods dns-test-4d7d0102-5103-415a-8c55-9b899bed601e) Apr 25 22:06:04.503: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1591.svc.cluster.local from pod dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e: the server could not find the requested resource (get pods dns-test-4d7d0102-5103-415a-8c55-9b899bed601e) Apr 25 22:06:04.506: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1591.svc.cluster.local from pod dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e: the server could not find the requested resource (get pods dns-test-4d7d0102-5103-415a-8c55-9b899bed601e) Apr 25 22:06:04.524: INFO: Lookups using dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e failed for: [wheezy_udp@dns-test-service.dns-1591.svc.cluster.local wheezy_tcp@dns-test-service.dns-1591.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1591.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1591.svc.cluster.local jessie_udp@dns-test-service.dns-1591.svc.cluster.local jessie_tcp@dns-test-service.dns-1591.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1591.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1591.svc.cluster.local] Apr 25 22:06:09.462: INFO: Unable to read wheezy_udp@dns-test-service.dns-1591.svc.cluster.local from pod dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e: the server could not find the requested resource (get pods dns-test-4d7d0102-5103-415a-8c55-9b899bed601e) Apr 25 22:06:09.465: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1591.svc.cluster.local from pod dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e: the server could not find the requested resource (get pods dns-test-4d7d0102-5103-415a-8c55-9b899bed601e) Apr 25 22:06:09.467: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1591.svc.cluster.local from pod dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e: the server could not find the requested resource (get pods 
dns-test-4d7d0102-5103-415a-8c55-9b899bed601e) Apr 25 22:06:09.470: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1591.svc.cluster.local from pod dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e: the server could not find the requested resource (get pods dns-test-4d7d0102-5103-415a-8c55-9b899bed601e) Apr 25 22:06:09.480: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e: Get https://172.30.12.66:32770/api/v1/namespaces/dns-1591/pods/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e/proxy/results/wheezy_tcp@PodARecord: stream error: stream ID 9609; INTERNAL_ERROR Apr 25 22:06:09.498: INFO: Unable to read jessie_udp@dns-test-service.dns-1591.svc.cluster.local from pod dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e: the server could not find the requested resource (get pods dns-test-4d7d0102-5103-415a-8c55-9b899bed601e) Apr 25 22:06:09.502: INFO: Unable to read jessie_tcp@dns-test-service.dns-1591.svc.cluster.local from pod dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e: the server could not find the requested resource (get pods dns-test-4d7d0102-5103-415a-8c55-9b899bed601e) Apr 25 22:06:09.504: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1591.svc.cluster.local from pod dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e: the server could not find the requested resource (get pods dns-test-4d7d0102-5103-415a-8c55-9b899bed601e) Apr 25 22:06:09.506: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1591.svc.cluster.local from pod dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e: the server could not find the requested resource (get pods dns-test-4d7d0102-5103-415a-8c55-9b899bed601e) Apr 25 22:06:09.523: INFO: Lookups using dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e failed for: [wheezy_udp@dns-test-service.dns-1591.svc.cluster.local wheezy_tcp@dns-test-service.dns-1591.svc.cluster.local 
wheezy_udp@_http._tcp.dns-test-service.dns-1591.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1591.svc.cluster.local wheezy_tcp@PodARecord jessie_udp@dns-test-service.dns-1591.svc.cluster.local jessie_tcp@dns-test-service.dns-1591.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1591.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1591.svc.cluster.local] Apr 25 22:06:14.544: INFO: DNS probes using dns-1591/dns-test-4d7d0102-5103-415a-8c55-9b899bed601e succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:06:15.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1591" for this suite. • [SLOW TEST:37.006 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":213,"skipped":3518,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:06:15.200: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] 
AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 25 22:06:15.626: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 25 22:06:17.636: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723449175, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723449175, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723449175, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723449175, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 25 22:06:20.703: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:06:20.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4301" for this suite. STEP: Destroying namespace "webhook-4301-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.709 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":214,"skipped":3527,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:06:20.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 25 22:06:20.974: INFO: (0) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 5.816942ms) Apr 25 22:06:20.977: INFO: (1) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 2.988714ms) Apr 25 22:06:20.980: INFO: (2) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 2.799811ms) Apr 25 22:06:20.983: INFO: (3) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 2.82574ms) Apr 25 22:06:20.986: INFO: (4) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 2.815726ms) Apr 25 22:06:20.989: INFO: (5) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 2.82655ms) Apr 25 22:06:20.992: INFO: (6) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 3.014918ms) Apr 25 22:06:21.008: INFO: (7) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 16.055784ms) Apr 25 22:06:21.011: INFO: (8) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 3.319165ms) Apr 25 22:06:21.014: INFO: (9) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 3.130617ms) Apr 25 22:06:21.017: INFO: (10) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 2.639306ms) Apr 25 22:06:21.020: INFO: (11) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 3.313531ms) Apr 25 22:06:21.023: INFO: (12) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 2.759523ms) Apr 25 22:06:21.026: INFO: (13) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 3.390497ms) Apr 25 22:06:21.030: INFO: (14) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 3.229284ms) Apr 25 22:06:21.033: INFO: (15) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 3.554008ms) Apr 25 22:06:21.037: INFO: (16) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 3.701333ms) Apr 25 22:06:21.040: INFO: (17) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 3.201207ms) Apr 25 22:06:21.043: INFO: (18) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 3.209996ms) Apr 25 22:06:21.046: INFO: (19) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/
pods/
(200; 2.525961ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:06:21.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-8764" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":278,"completed":215,"skipped":3561,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:06:21.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod Apr 25 22:06:21.164: INFO: PodSpec: initContainers in spec.initContainers Apr 25 22:07:12.022: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-9ca88df8-b1d1-4592-8ed9-589f6b630f7b", GenerateName:"", Namespace:"init-container-8493", SelfLink:"/api/v1/namespaces/init-container-8493/pods/pod-init-9ca88df8-b1d1-4592-8ed9-589f6b630f7b", UID:"a731090d-7f97-4128-a612-e2b746654e09", ResourceVersion:"11031808", Generation:0, 
CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63723449181, loc:(*time.Location)(0x78ee080)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"164728972"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-dpw9k", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0015dc580), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), 
Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-dpw9k", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-dpw9k", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, 
d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-dpw9k", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002b55888), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0026100c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002b55ba0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002b55bc0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002b55bc8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), 
RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002b55bcc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723449181, loc:(*time.Location)(0x78ee080)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723449181, loc:(*time.Location)(0x78ee080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723449181, loc:(*time.Location)(0x78ee080)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723449181, loc:(*time.Location)(0x78ee080)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.8", PodIP:"10.244.2.240", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.240"}}, StartTime:(*v1.Time)(0xc001ec83a0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001e48380)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001e483f0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://47980d06a7c53574fdb9286905f0af1d6de3ddd10ef2a2fe681c61f296b222b2", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001ec83e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001ec83c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc002b55c4f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:07:12.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8493" for this suite. 
• [SLOW TEST:50.997 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":216,"skipped":3590,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:07:12.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 25 22:07:12.621: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 25 22:07:14.636: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723449232, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723449232, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723449232, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723449232, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 25 22:07:17.703: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 25 22:07:17.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:07:18.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"webhook-1432" for this suite. STEP: Destroying namespace "webhook-1432-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.637 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":217,"skipped":3598,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:07:18.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating pod Apr 25 22:07:22.774: INFO: Pod pod-hostip-4e895913-6377-4788-a17c-e091431c9e1a has hostIP: 172.17.0.8 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 
22:07:22.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7113" for this suite. •{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":218,"skipped":3632,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:07:22.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 25 22:07:22.923: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"e8a422d6-ca9a-458d-8e4e-b5eba54f7995", Controller:(*bool)(0xc004dfb342), BlockOwnerDeletion:(*bool)(0xc004dfb343)}} Apr 25 22:07:22.973: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"3daf71d1-1dbf-48c0-8133-11560fab3acb", Controller:(*bool)(0xc005c7fbaa), BlockOwnerDeletion:(*bool)(0xc005c7fbab)}} Apr 25 22:07:23.034: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"52ab9d35-b4c4-48f0-a9a2-36723d2ed339", Controller:(*bool)(0xc00454b20a), BlockOwnerDeletion:(*bool)(0xc00454b20b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:07:28.099: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2156" for this suite. • [SLOW TEST:5.379 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":219,"skipped":3652,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:07:28.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 25 22:07:28.271: INFO: Waiting up to 5m0s for pod "pod-7f9b604e-0114-435b-8f66-34c2053fa6cc" in namespace "emptydir-540" to be "success or failure" Apr 25 22:07:28.274: INFO: Pod "pod-7f9b604e-0114-435b-8f66-34c2053fa6cc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.204444ms Apr 25 22:07:30.278: INFO: Pod "pod-7f9b604e-0114-435b-8f66-34c2053fa6cc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007444165s Apr 25 22:07:32.283: INFO: Pod "pod-7f9b604e-0114-435b-8f66-34c2053fa6cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011661965s STEP: Saw pod success Apr 25 22:07:32.283: INFO: Pod "pod-7f9b604e-0114-435b-8f66-34c2053fa6cc" satisfied condition "success or failure" Apr 25 22:07:32.286: INFO: Trying to get logs from node jerma-worker2 pod pod-7f9b604e-0114-435b-8f66-34c2053fa6cc container test-container: STEP: delete the pod Apr 25 22:07:32.318: INFO: Waiting for pod pod-7f9b604e-0114-435b-8f66-34c2053fa6cc to disappear Apr 25 22:07:32.322: INFO: Pod pod-7f9b604e-0114-435b-8f66-34c2053fa6cc no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:07:32.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-540" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":220,"skipped":3678,"failed":0} SSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:07:32.329: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Apr 25 22:07:40.489: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 25 22:07:40.548: INFO: Pod pod-with-poststart-exec-hook still exists Apr 25 22:07:42.548: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 25 22:07:42.552: INFO: Pod pod-with-poststart-exec-hook still exists Apr 25 22:07:44.548: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 25 22:07:44.552: INFO: Pod pod-with-poststart-exec-hook still exists Apr 25 22:07:46.548: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 25 22:07:46.552: INFO: Pod pod-with-poststart-exec-hook still exists Apr 25 22:07:48.548: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 25 22:07:48.552: INFO: Pod pod-with-poststart-exec-hook still exists Apr 25 22:07:50.548: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 25 22:07:50.552: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:07:50.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5764" for this suite. 
• [SLOW TEST:18.233 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":221,"skipped":3683,"failed":0} S ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:07:50.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 25 22:07:50.655: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f1a0f4f0-8f39-4c56-a28f-8a1ec9272cbf" in namespace "downward-api-7715" to be "success or failure" Apr 25 22:07:50.672: INFO: Pod "downwardapi-volume-f1a0f4f0-8f39-4c56-a28f-8a1ec9272cbf": Phase="Pending", Reason="", 
readiness=false. Elapsed: 16.814622ms Apr 25 22:07:52.676: INFO: Pod "downwardapi-volume-f1a0f4f0-8f39-4c56-a28f-8a1ec9272cbf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021000506s Apr 25 22:07:54.679: INFO: Pod "downwardapi-volume-f1a0f4f0-8f39-4c56-a28f-8a1ec9272cbf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024487088s STEP: Saw pod success Apr 25 22:07:54.679: INFO: Pod "downwardapi-volume-f1a0f4f0-8f39-4c56-a28f-8a1ec9272cbf" satisfied condition "success or failure" Apr 25 22:07:54.682: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-f1a0f4f0-8f39-4c56-a28f-8a1ec9272cbf container client-container: STEP: delete the pod Apr 25 22:07:54.752: INFO: Waiting for pod downwardapi-volume-f1a0f4f0-8f39-4c56-a28f-8a1ec9272cbf to disappear Apr 25 22:07:54.778: INFO: Pod downwardapi-volume-f1a0f4f0-8f39-4c56-a28f-8a1ec9272cbf no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:07:54.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7715" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":222,"skipped":3684,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:07:54.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Apr 25 22:08:00.863: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-6726 PodName:pod-sharedvolume-b9006375-eafc-4d93-9033-00c3b0c6f2c8 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 25 22:08:00.863: INFO: >>> kubeConfig: /root/.kube/config I0425 22:08:00.896316 6 log.go:172] (0xc004898fd0) (0xc001e59cc0) Create stream I0425 22:08:00.896351 6 log.go:172] (0xc004898fd0) (0xc001e59cc0) Stream added, broadcasting: 1 I0425 22:08:00.898507 6 log.go:172] (0xc004898fd0) Reply frame received for 1 I0425 22:08:00.898562 6 log.go:172] (0xc004898fd0) (0xc0028b7540) Create stream I0425 22:08:00.898578 6 log.go:172] (0xc004898fd0) (0xc0028b7540) Stream added, broadcasting: 3 I0425 22:08:00.899552 6 log.go:172] (0xc004898fd0) Reply frame received for 3 I0425 22:08:00.899607 6 log.go:172] (0xc004898fd0) 
(0xc001748000) Create stream I0425 22:08:00.899633 6 log.go:172] (0xc004898fd0) (0xc001748000) Stream added, broadcasting: 5 I0425 22:08:00.900694 6 log.go:172] (0xc004898fd0) Reply frame received for 5 I0425 22:08:00.984263 6 log.go:172] (0xc004898fd0) Data frame received for 5 I0425 22:08:00.984311 6 log.go:172] (0xc001748000) (5) Data frame handling I0425 22:08:00.984336 6 log.go:172] (0xc004898fd0) Data frame received for 3 I0425 22:08:00.984352 6 log.go:172] (0xc0028b7540) (3) Data frame handling I0425 22:08:00.984363 6 log.go:172] (0xc0028b7540) (3) Data frame sent I0425 22:08:00.984375 6 log.go:172] (0xc004898fd0) Data frame received for 3 I0425 22:08:00.984386 6 log.go:172] (0xc0028b7540) (3) Data frame handling I0425 22:08:00.986114 6 log.go:172] (0xc004898fd0) Data frame received for 1 I0425 22:08:00.986147 6 log.go:172] (0xc001e59cc0) (1) Data frame handling I0425 22:08:00.986170 6 log.go:172] (0xc001e59cc0) (1) Data frame sent I0425 22:08:00.986427 6 log.go:172] (0xc004898fd0) (0xc001e59cc0) Stream removed, broadcasting: 1 I0425 22:08:00.986472 6 log.go:172] (0xc004898fd0) Go away received I0425 22:08:00.986533 6 log.go:172] (0xc004898fd0) (0xc001e59cc0) Stream removed, broadcasting: 1 I0425 22:08:00.986561 6 log.go:172] (0xc004898fd0) (0xc0028b7540) Stream removed, broadcasting: 3 I0425 22:08:00.986581 6 log.go:172] (0xc004898fd0) (0xc001748000) Stream removed, broadcasting: 5 Apr 25 22:08:00.986: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:08:00.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6726" for this suite. 
• [SLOW TEST:6.209 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":223,"skipped":3687,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:08:00.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:08:05.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-5701" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":224,"skipped":3705,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:08:05.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-7348 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-7348 STEP: Creating statefulset with conflicting port in namespace statefulset-7348 STEP: Waiting until pod test-pod will start running in namespace statefulset-7348 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-7348 Apr 25 22:08:09.532: INFO: Observed stateful pod in namespace: statefulset-7348, name: ss-0, uid: 29ce6c90-fac0-4429-9f0e-c6c197f16701, status phase: Pending. Waiting for statefulset controller to delete. 
Apr 25 22:08:09.847: INFO: Observed stateful pod in namespace: statefulset-7348, name: ss-0, uid: 29ce6c90-fac0-4429-9f0e-c6c197f16701, status phase: Failed. Waiting for statefulset controller to delete. Apr 25 22:08:09.858: INFO: Observed stateful pod in namespace: statefulset-7348, name: ss-0, uid: 29ce6c90-fac0-4429-9f0e-c6c197f16701, status phase: Failed. Waiting for statefulset controller to delete. Apr 25 22:08:09.868: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-7348 STEP: Removing pod with conflicting port in namespace statefulset-7348 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-7348 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Apr 25 22:08:14.027: INFO: Deleting all statefulset in ns statefulset-7348 Apr 25 22:08:14.030: INFO: Scaling statefulset ss to 0 Apr 25 22:08:24.046: INFO: Waiting for statefulset status.replicas updated to 0 Apr 25 22:08:24.051: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:08:24.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7348" for this suite. 
• [SLOW TEST:19.008 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":225,"skipped":3721,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:08:24.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:08:24.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3379" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":226,"skipped":3733,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:08:24.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-06d2f348-10c4-4b77-90ee-bde57da6d7ed STEP: Creating a pod to test consume secrets Apr 25 22:08:24.320: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c6442370-920c-4fcd-8a2a-c7da050b36d7" in namespace "projected-1337" to be "success or failure" Apr 25 22:08:24.332: INFO: Pod "pod-projected-secrets-c6442370-920c-4fcd-8a2a-c7da050b36d7": Phase="Pending", Reason="", readiness=false. Elapsed: 11.924457ms Apr 25 22:08:26.336: INFO: Pod "pod-projected-secrets-c6442370-920c-4fcd-8a2a-c7da050b36d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016345798s Apr 25 22:08:28.341: INFO: Pod "pod-projected-secrets-c6442370-920c-4fcd-8a2a-c7da050b36d7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.02096536s STEP: Saw pod success Apr 25 22:08:28.341: INFO: Pod "pod-projected-secrets-c6442370-920c-4fcd-8a2a-c7da050b36d7" satisfied condition "success or failure" Apr 25 22:08:28.344: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-c6442370-920c-4fcd-8a2a-c7da050b36d7 container projected-secret-volume-test: STEP: delete the pod Apr 25 22:08:28.423: INFO: Waiting for pod pod-projected-secrets-c6442370-920c-4fcd-8a2a-c7da050b36d7 to disappear Apr 25 22:08:28.506: INFO: Pod pod-projected-secrets-c6442370-920c-4fcd-8a2a-c7da050b36d7 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:08:28.506: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1337" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":227,"skipped":3747,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:08:28.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should create and stop a 
replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller Apr 25 22:08:28.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3813' Apr 25 22:08:28.893: INFO: stderr: "" Apr 25 22:08:28.893: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 25 22:08:28.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3813' Apr 25 22:08:28.991: INFO: stderr: "" Apr 25 22:08:28.991: INFO: stdout: "update-demo-nautilus-bcq6p update-demo-nautilus-mhlfk " Apr 25 22:08:28.991: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bcq6p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3813' Apr 25 22:08:29.087: INFO: stderr: "" Apr 25 22:08:29.087: INFO: stdout: "" Apr 25 22:08:29.087: INFO: update-demo-nautilus-bcq6p is created but not running Apr 25 22:08:34.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3813' Apr 25 22:08:34.190: INFO: stderr: "" Apr 25 22:08:34.190: INFO: stdout: "update-demo-nautilus-bcq6p update-demo-nautilus-mhlfk " Apr 25 22:08:34.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bcq6p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3813' Apr 25 22:08:34.274: INFO: stderr: "" Apr 25 22:08:34.274: INFO: stdout: "true" Apr 25 22:08:34.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bcq6p -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3813' Apr 25 22:08:34.369: INFO: stderr: "" Apr 25 22:08:34.369: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 25 22:08:34.369: INFO: validating pod update-demo-nautilus-bcq6p Apr 25 22:08:34.373: INFO: got data: { "image": "nautilus.jpg" } Apr 25 22:08:34.373: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 25 22:08:34.373: INFO: update-demo-nautilus-bcq6p is verified up and running Apr 25 22:08:34.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mhlfk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3813' Apr 25 22:08:34.467: INFO: stderr: "" Apr 25 22:08:34.467: INFO: stdout: "true" Apr 25 22:08:34.467: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mhlfk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3813' Apr 25 22:08:34.556: INFO: stderr: "" Apr 25 22:08:34.556: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 25 22:08:34.556: INFO: validating pod update-demo-nautilus-mhlfk Apr 25 22:08:34.560: INFO: got data: { "image": "nautilus.jpg" } Apr 25 22:08:34.560: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 25 22:08:34.560: INFO: update-demo-nautilus-mhlfk is verified up and running
STEP: using delete to clean up resources
Apr 25 22:08:34.560: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3813'
Apr 25 22:08:34.664: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 25 22:08:34.664: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Apr 25 22:08:34.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3813'
Apr 25 22:08:34.763: INFO: stderr: "No resources found in kubectl-3813 namespace.\n"
Apr 25 22:08:34.763: INFO: stdout: ""
Apr 25 22:08:34.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3813 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 25 22:08:34.905: INFO: stderr: ""
Apr 25 22:08:34.905: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 22:08:34.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3813" for this suite.
• [SLOW TEST:6.398 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322
should create and stop a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":228,"skipped":3760,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 22:08:34.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 25 22:08:35.135: INFO: Create a RollingUpdate DaemonSet
Apr 25 22:08:35.138: INFO: Check that daemon pods launch on every node of the cluster
Apr 25 22:08:35.191: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 25 22:08:35.194: INFO: Number of nodes with available pods: 0
Apr 25 22:08:35.194: INFO: Node jerma-worker is running more than one daemon pod
Apr 25 22:08:36.198: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 25 22:08:36.250: INFO: Number of nodes with available pods: 0
Apr 25 22:08:36.251: INFO: Node jerma-worker is running more than one daemon pod
Apr 25 22:08:37.410: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 25 22:08:37.525: INFO: Number of nodes with available pods: 0
Apr 25 22:08:37.525: INFO: Node jerma-worker is running more than one daemon pod
Apr 25 22:08:38.199: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 25 22:08:38.202: INFO: Number of nodes with available pods: 0
Apr 25 22:08:38.202: INFO: Node jerma-worker is running more than one daemon pod
Apr 25 22:08:39.206: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 25 22:08:39.232: INFO: Number of nodes with available pods: 2
Apr 25 22:08:39.232: INFO: Number of running nodes: 2, number of available pods: 2
Apr 25 22:08:39.232: INFO: Update the DaemonSet to trigger a rollout
Apr 25 22:08:39.258: INFO: Updating DaemonSet daemon-set
Apr 25 22:08:50.330: INFO: Roll back the DaemonSet before rollout is complete
Apr 25 22:08:50.337: INFO: Updating DaemonSet daemon-set
Apr 25 22:08:50.337: INFO: Make sure DaemonSet rollback is complete
Apr 25 22:08:50.342: INFO: Wrong image for pod: daemon-set-8pwqn. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Apr 25 22:08:50.342: INFO: Pod daemon-set-8pwqn is not available
Apr 25 22:08:50.349: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 25 22:08:51.353: INFO: Wrong image for pod: daemon-set-8pwqn. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Apr 25 22:08:51.353: INFO: Pod daemon-set-8pwqn is not available
Apr 25 22:08:51.358: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 25 22:08:52.352: INFO: Pod daemon-set-jmgxm is not available
Apr 25 22:08:52.356: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-221, will wait for the garbage collector to delete the pods
Apr 25 22:08:52.420: INFO: Deleting DaemonSet.extensions daemon-set took: 6.220538ms
Apr 25 22:08:52.720: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.29608ms
Apr 25 22:08:59.324: INFO: Number of nodes with available pods: 0
Apr 25 22:08:59.324: INFO: Number of running nodes: 0, number of available pods: 0
Apr 25 22:08:59.327: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-221/daemonsets","resourceVersion":"11032738"},"items":null}
Apr 25 22:08:59.330: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-221/pods","resourceVersion":"11032738"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 22:08:59.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-221" for this suite.
• [SLOW TEST:24.433 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":229,"skipped":3808,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 22:08:59.346: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-854e8bdc-fc7c-48bf-b46c-5c8632e5d704
STEP: Creating a pod to test consume configMaps
Apr 25 22:08:59.458: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e71b5bed-5983-4b30-8aa8-1d32683de785" in namespace "projected-3472" to be "success or failure"
Apr 25 22:08:59.462: INFO: Pod "pod-projected-configmaps-e71b5bed-5983-4b30-8aa8-1d32683de785": Phase="Pending", Reason="", readiness=false. Elapsed: 3.221274ms
Apr 25 22:09:01.465: INFO: Pod "pod-projected-configmaps-e71b5bed-5983-4b30-8aa8-1d32683de785": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006353256s
Apr 25 22:09:03.468: INFO: Pod "pod-projected-configmaps-e71b5bed-5983-4b30-8aa8-1d32683de785": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009544588s
STEP: Saw pod success
Apr 25 22:09:03.468: INFO: Pod "pod-projected-configmaps-e71b5bed-5983-4b30-8aa8-1d32683de785" satisfied condition "success or failure"
Apr 25 22:09:03.470: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-e71b5bed-5983-4b30-8aa8-1d32683de785 container projected-configmap-volume-test:
STEP: delete the pod
Apr 25 22:09:03.506: INFO: Waiting for pod pod-projected-configmaps-e71b5bed-5983-4b30-8aa8-1d32683de785 to disappear
Apr 25 22:09:03.517: INFO: Pod pod-projected-configmaps-e71b5bed-5983-4b30-8aa8-1d32683de785 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 22:09:03.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3472" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":230,"skipped":3830,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 22:09:03.526: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-bc2876a7-8e59-4e1b-8f98-46a7fb35dc51
STEP: Creating a pod to test consume configMaps
Apr 25 22:09:03.627: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-43d954b9-05a3-45b8-b0f3-2a458e85d9ad" in namespace "projected-4022" to be "success or failure"
Apr 25 22:09:03.631: INFO: Pod "pod-projected-configmaps-43d954b9-05a3-45b8-b0f3-2a458e85d9ad": Phase="Pending", Reason="", readiness=false. Elapsed: 3.883435ms
Apr 25 22:09:05.635: INFO: Pod "pod-projected-configmaps-43d954b9-05a3-45b8-b0f3-2a458e85d9ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008171339s
Apr 25 22:09:07.711: INFO: Pod "pod-projected-configmaps-43d954b9-05a3-45b8-b0f3-2a458e85d9ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.083604745s
STEP: Saw pod success
Apr 25 22:09:07.711: INFO: Pod "pod-projected-configmaps-43d954b9-05a3-45b8-b0f3-2a458e85d9ad" satisfied condition "success or failure"
Apr 25 22:09:07.713: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-43d954b9-05a3-45b8-b0f3-2a458e85d9ad container projected-configmap-volume-test:
STEP: delete the pod
Apr 25 22:09:07.761: INFO: Waiting for pod pod-projected-configmaps-43d954b9-05a3-45b8-b0f3-2a458e85d9ad to disappear
Apr 25 22:09:07.769: INFO: Pod pod-projected-configmaps-43d954b9-05a3-45b8-b0f3-2a458e85d9ad no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 22:09:07.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4022" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":231,"skipped":3871,"failed":0}
------------------------------
[sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 22:09:07.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Starting the proxy
Apr 25 22:09:07.856: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix907175166/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 22:09:07.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4841" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":232,"skipped":3871,"failed":0}
SSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 22:09:07.940: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46
[It] should return a 406 for a backend which does not implement metadata [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 22:09:08.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-3773" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":233,"skipped":3875,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 22:09:08.028: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name configmap-test-volume-map-47ccab32-b225-422e-9a21-f19c4ddd81ab
STEP: Creating a pod to test consume configMaps
Apr 25 22:09:08.124: INFO: Waiting up to 5m0s for pod "pod-configmaps-7b92a262-2726-4d36-989c-69ad51e368af" in namespace "configmap-8990" to be "success or failure"
Apr 25 22:09:08.147: INFO: Pod "pod-configmaps-7b92a262-2726-4d36-989c-69ad51e368af": Phase="Pending", Reason="", readiness=false. Elapsed: 22.302435ms
Apr 25 22:09:10.190: INFO: Pod "pod-configmaps-7b92a262-2726-4d36-989c-69ad51e368af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065458117s
Apr 25 22:09:12.193: INFO: Pod "pod-configmaps-7b92a262-2726-4d36-989c-69ad51e368af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.069212677s
STEP: Saw pod success
Apr 25 22:09:12.193: INFO: Pod "pod-configmaps-7b92a262-2726-4d36-989c-69ad51e368af" satisfied condition "success or failure"
Apr 25 22:09:12.196: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-7b92a262-2726-4d36-989c-69ad51e368af container configmap-volume-test:
STEP: delete the pod
Apr 25 22:09:12.212: INFO: Waiting for pod pod-configmaps-7b92a262-2726-4d36-989c-69ad51e368af to disappear
Apr 25 22:09:12.216: INFO: Pod pod-configmaps-7b92a262-2726-4d36-989c-69ad51e368af no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 22:09:12.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8990" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":234,"skipped":3883,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 22:09:12.224: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
Apr 25 22:09:12.314: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-31b64098-b82c-4420-9bef-84b5eed44022" in namespace "security-context-test-261" to be "success or failure"
Apr 25 22:09:12.333: INFO: Pod "busybox-privileged-false-31b64098-b82c-4420-9bef-84b5eed44022": Phase="Pending", Reason="", readiness=false. Elapsed: 19.148769ms
Apr 25 22:09:14.338: INFO: Pod "busybox-privileged-false-31b64098-b82c-4420-9bef-84b5eed44022": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023529262s
Apr 25 22:09:16.342: INFO: Pod "busybox-privileged-false-31b64098-b82c-4420-9bef-84b5eed44022": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027839366s
Apr 25 22:09:16.342: INFO: Pod "busybox-privileged-false-31b64098-b82c-4420-9bef-84b5eed44022" satisfied condition "success or failure"
Apr 25 22:09:16.348: INFO: Got logs for pod "busybox-privileged-false-31b64098-b82c-4420-9bef-84b5eed44022": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 22:09:16.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-261" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":235,"skipped":3909,"failed":0}
SSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 22:09:16.356: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0666 on node default medium
Apr 25 22:09:16.412: INFO: Waiting up to 5m0s for pod "pod-592c2755-89b8-4c80-a4fd-8540fedace4b" in namespace "emptydir-4659" to be "success or failure"
Apr 25 22:09:16.427: INFO: Pod "pod-592c2755-89b8-4c80-a4fd-8540fedace4b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.556581ms
Apr 25 22:09:18.431: INFO: Pod "pod-592c2755-89b8-4c80-a4fd-8540fedace4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019068972s
Apr 25 22:09:20.435: INFO: Pod "pod-592c2755-89b8-4c80-a4fd-8540fedace4b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023115573s
STEP: Saw pod success
Apr 25 22:09:20.436: INFO: Pod "pod-592c2755-89b8-4c80-a4fd-8540fedace4b" satisfied condition "success or failure"
Apr 25 22:09:20.439: INFO: Trying to get logs from node jerma-worker pod pod-592c2755-89b8-4c80-a4fd-8540fedace4b container test-container:
STEP: delete the pod
Apr 25 22:09:20.459: INFO: Waiting for pod pod-592c2755-89b8-4c80-a4fd-8540fedace4b to disappear
Apr 25 22:09:20.464: INFO: Pod pod-592c2755-89b8-4c80-a4fd-8540fedace4b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 22:09:20.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4659" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":236,"skipped":3912,"failed":0}
SSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 22:09:20.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 22:09:26.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-4598" for this suite.
STEP: Destroying namespace "nsdeletetest-1994" for this suite.
Apr 25 22:09:26.842: INFO: Namespace nsdeletetest-1994 was already deleted
STEP: Destroying namespace "nsdeletetest-7974" for this suite.
• [SLOW TEST:6.356 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":237,"skipped":3915,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 22:09:26.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 22:09:37.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-422" for this suite.
• [SLOW TEST:11.117 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a replication controller. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":278,"completed":238,"skipped":3934,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 22:09:37.964: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
Apr 25 22:09:38.017: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 25 22:09:38.041: INFO: Waiting for terminating namespaces to be deleted...
Apr 25 22:09:38.044: INFO: Logging pods the kubelet thinks is on node jerma-worker before test
Apr 25 22:09:38.050: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
Apr 25 22:09:38.050: INFO: Container kube-proxy ready: true, restart count 0
Apr 25 22:09:38.050: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
Apr 25 22:09:38.050: INFO: Container kindnet-cni ready: true, restart count 0
Apr 25 22:09:38.050: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test
Apr 25 22:09:38.069: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded)
Apr 25 22:09:38.069: INFO: Container kube-hunter ready: false, restart count 0
Apr 25 22:09:38.069: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
Apr 25 22:09:38.069: INFO: Container kindnet-cni ready: true, restart count 0
Apr 25 22:09:38.069: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded)
Apr 25 22:09:38.069: INFO: Container kube-bench ready: false, restart count 0
Apr 25 22:09:38.069: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
Apr 25 22:09:38.069: INFO: Container kube-proxy ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-c0831487-0ffa-4c92-a7d9-f177fc30a0fe 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-c0831487-0ffa-4c92-a7d9-f177fc30a0fe off the node jerma-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-c0831487-0ffa-4c92-a7d9-f177fc30a0fe
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 22:09:54.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-2337" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77
• [SLOW TEST:16.366 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":239,"skipped":3954,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:09:54.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's args Apr 25 22:09:54.432: INFO: Waiting up to 5m0s for pod "var-expansion-ea55105c-0d78-4c58-8161-a4d0bc4d7910" in namespace "var-expansion-3376" to be "success or failure" Apr 25 22:09:54.434: INFO: Pod "var-expansion-ea55105c-0d78-4c58-8161-a4d0bc4d7910": Phase="Pending", Reason="", readiness=false. Elapsed: 2.35206ms Apr 25 22:09:56.442: INFO: Pod "var-expansion-ea55105c-0d78-4c58-8161-a4d0bc4d7910": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009557858s Apr 25 22:09:58.454: INFO: Pod "var-expansion-ea55105c-0d78-4c58-8161-a4d0bc4d7910": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.021700543s STEP: Saw pod success Apr 25 22:09:58.454: INFO: Pod "var-expansion-ea55105c-0d78-4c58-8161-a4d0bc4d7910" satisfied condition "success or failure" Apr 25 22:09:58.459: INFO: Trying to get logs from node jerma-worker pod var-expansion-ea55105c-0d78-4c58-8161-a4d0bc4d7910 container dapi-container: STEP: delete the pod Apr 25 22:09:58.472: INFO: Waiting for pod var-expansion-ea55105c-0d78-4c58-8161-a4d0bc4d7910 to disappear Apr 25 22:09:58.477: INFO: Pod var-expansion-ea55105c-0d78-4c58-8161-a4d0bc4d7910 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:09:58.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3376" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":240,"skipped":3978,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:09:58.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing 
the watch once it receives two notifications Apr 25 22:09:58.590: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2903 /api/v1/namespaces/watch-2903/configmaps/e2e-watch-test-watch-closed 1042262a-88f1-4a40-acac-cbe44f28d4ba 11033196 0 2020-04-25 22:09:58 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 25 22:09:58.591: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2903 /api/v1/namespaces/watch-2903/configmaps/e2e-watch-test-watch-closed 1042262a-88f1-4a40-acac-cbe44f28d4ba 11033197 0 2020-04-25 22:09:58 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Apr 25 22:09:58.612: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2903 /api/v1/namespaces/watch-2903/configmaps/e2e-watch-test-watch-closed 1042262a-88f1-4a40-acac-cbe44f28d4ba 11033198 0 2020-04-25 22:09:58 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 25 22:09:58.612: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-2903 /api/v1/namespaces/watch-2903/configmaps/e2e-watch-test-watch-closed 1042262a-88f1-4a40-acac-cbe44f28d4ba 11033199 0 2020-04-25 22:09:58 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:09:58.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2903" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":241,"skipped":3993,"failed":0} SSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:09:58.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-530.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-530.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-530.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-530.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-530.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-530.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 25 22:10:04.755: INFO: DNS probes using dns-530/dns-test-e5e35515-08fa-429e-bd91-4caf67e5a30a succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:10:04.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-530" for this suite. 
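The awk one-liner in the probe commands above builds the pod's DNS A-record name from its IP by replacing dots with dashes and appending the namespace and `.pod.cluster.local`. A minimal sketch of that naming rule (the sample IP is made up; the namespace `dns-530` is taken from the log):

```python
def pod_a_record(pod_ip: str, namespace: str, cluster_domain: str = "cluster.local") -> str:
    """Build the pod A-record name the way the probe's awk pipeline does:
    dots in the IP become dashes, then "<namespace>.pod.<cluster domain>" is appended."""
    return f"{pod_ip.replace('.', '-')}.{namespace}.pod.{cluster_domain}"

# Hypothetical pod IP, namespace from the test run above:
print(pod_a_record("10.244.1.7", "dns-530"))  # 10-244-1-7.dns-530.pod.cluster.local
```

The resulting name is what the `dig +notcp`/`dig +tcp` checks resolve over UDP and TCP respectively.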
• [SLOW TEST:6.216 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":242,"skipped":3996,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:10:04.835: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating api versions Apr 25 22:10:04.873: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Apr 25 22:10:05.311: INFO: stderr: "" Apr 25 22:10:05.311: INFO: stdout: 
"admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:10:05.311: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1402" for this suite. 
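The assertion behind this test is simple: split the `kubectl api-versions` stdout on newlines and confirm the core `v1` group/version is listed. A sketch of that check (the stdout sample is abridged from the output above):

```python
# Mirror the conformance check: the core "v1" entry must appear in
# `kubectl api-versions` output. Sample stdout abridged from the log above.
stdout = "admissionregistration.k8s.io/v1\napps/v1\nbatch/v1\nnetworking.k8s.io/v1\nv1\n"

versions = stdout.strip().split("\n")
assert "v1" in versions, "core v1 API group/version should always be served"
print(len(versions))  # 5
```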
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":278,"completed":243,"skipped":3999,"failed":0} SS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:10:05.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-d004dc8a-790a-4bfb-84aa-2fa5dd9fc350 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-d004dc8a-790a-4bfb-84aa-2fa5dd9fc350 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:10:11.973: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8873" for this suite. 
• [SLOW TEST:6.661 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":244,"skipped":4001,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:10:11.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:10:12.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-1908" for this suite. 
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":245,"skipped":4024,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:10:12.152: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 25 22:10:12.234: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e5141f22-dcc8-4636-a95c-69f067e35d87" in namespace "projected-3151" to be "success or failure" Apr 25 22:10:12.251: INFO: Pod "downwardapi-volume-e5141f22-dcc8-4636-a95c-69f067e35d87": Phase="Pending", Reason="", readiness=false. Elapsed: 17.18413ms Apr 25 22:10:14.332: INFO: Pod "downwardapi-volume-e5141f22-dcc8-4636-a95c-69f067e35d87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098669263s Apr 25 22:10:16.337: INFO: Pod "downwardapi-volume-e5141f22-dcc8-4636-a95c-69f067e35d87": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.103089375s STEP: Saw pod success Apr 25 22:10:16.337: INFO: Pod "downwardapi-volume-e5141f22-dcc8-4636-a95c-69f067e35d87" satisfied condition "success or failure" Apr 25 22:10:16.340: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-e5141f22-dcc8-4636-a95c-69f067e35d87 container client-container: STEP: delete the pod Apr 25 22:10:16.360: INFO: Waiting for pod downwardapi-volume-e5141f22-dcc8-4636-a95c-69f067e35d87 to disappear Apr 25 22:10:16.412: INFO: Pod downwardapi-volume-e5141f22-dcc8-4636-a95c-69f067e35d87 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:10:16.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3151" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":246,"skipped":4048,"failed":0} S ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:10:16.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted 
Apr 25 22:10:23.576: INFO: 0 pods remaining Apr 25 22:10:23.576: INFO: 0 pods has nil DeletionTimestamp Apr 25 22:10:23.576: INFO: Apr 25 22:10:23.859: INFO: 0 pods remaining Apr 25 22:10:23.859: INFO: 0 pods has nil DeletionTimestamp Apr 25 22:10:23.859: INFO: STEP: Gathering metrics W0425 22:10:25.180147 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 25 22:10:25.180: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:10:25.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-731" for this suite. 
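The "keep the rc around until all its pods are deleted" behavior above follows from the delete's propagation policy: with `Foreground`, the owner is held back by the `foregroundDeletion` finalizer until its dependents are gone. A toy model of the documented policy semantics, not the actual kube-controller-manager code:

```python
def owner_removed_immediately(propagation_policy: str) -> bool:
    """Model deleteOptions.propagationPolicy for deleting an owner (e.g. an RC)."""
    if propagation_policy == "Orphan":
        return True   # owner goes away; dependents are left behind
    if propagation_policy == "Background":
        return True   # owner goes away first; GC deletes dependents afterwards
    if propagation_policy == "Foreground":
        return False  # owner lingers (foregroundDeletion finalizer) until dependents are deleted
    raise ValueError(f"unknown propagationPolicy: {propagation_policy}")

print(owner_removed_immediately("Foreground"))  # False
print(owner_removed_immediately("Background"))  # True
```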
• [SLOW TEST:8.781 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":247,"skipped":4049,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:10:25.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin Apr 25 22:10:25.379: INFO: Waiting up to 5m0s for pod "downwardapi-volume-04c63d1e-c450-4105-bd03-de0c1341184e" in namespace "downward-api-4368" to be "success or failure" Apr 25 22:10:25.712: INFO: Pod "downwardapi-volume-04c63d1e-c450-4105-bd03-de0c1341184e": Phase="Pending", 
Reason="", readiness=false. Elapsed: 332.889786ms Apr 25 22:10:27.716: INFO: Pod "downwardapi-volume-04c63d1e-c450-4105-bd03-de0c1341184e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.336600742s Apr 25 22:10:29.720: INFO: Pod "downwardapi-volume-04c63d1e-c450-4105-bd03-de0c1341184e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.340723703s STEP: Saw pod success Apr 25 22:10:29.720: INFO: Pod "downwardapi-volume-04c63d1e-c450-4105-bd03-de0c1341184e" satisfied condition "success or failure" Apr 25 22:10:29.722: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-04c63d1e-c450-4105-bd03-de0c1341184e container client-container: STEP: delete the pod Apr 25 22:10:29.781: INFO: Waiting for pod downwardapi-volume-04c63d1e-c450-4105-bd03-de0c1341184e to disappear Apr 25 22:10:29.807: INFO: Pod downwardapi-volume-04c63d1e-c450-4105-bd03-de0c1341184e no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:10:29.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4368" for this suite. 
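The downward API default being verified here: when a container declares no memory limit, `limits.memory` exposed through the downward API volume falls back to the node's allocatable memory. A minimal sketch of that fallback (the byte quantities are made-up examples, not values from this run):

```python
def effective_memory_limit(container_limit_bytes, node_allocatable_bytes):
    """Downward API limits.memory: the container's own limit if set,
    otherwise the node's allocatable memory."""
    if container_limit_bytes is not None:
        return container_limit_bytes
    return node_allocatable_bytes

node_allocatable = 4 * 1024**3  # hypothetical 4Gi node
print(effective_memory_limit(None, node_allocatable) == node_allocatable)  # True
print(effective_memory_limit(512 * 1024**2, node_allocatable))  # 536870912
```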
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":248,"skipped":4070,"failed":0} SSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:10:29.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-fda4571b-daa1-4121-8799-11e583918682 STEP: Creating a pod to test consume configMaps Apr 25 22:10:29.998: INFO: Waiting up to 5m0s for pod "pod-configmaps-cf688fee-3316-4807-9767-95a12b1f2dbb" in namespace "configmap-1412" to be "success or failure" Apr 25 22:10:30.011: INFO: Pod "pod-configmaps-cf688fee-3316-4807-9767-95a12b1f2dbb": Phase="Pending", Reason="", readiness=false. Elapsed: 13.372955ms Apr 25 22:10:32.095: INFO: Pod "pod-configmaps-cf688fee-3316-4807-9767-95a12b1f2dbb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096930383s Apr 25 22:10:34.099: INFO: Pod "pod-configmaps-cf688fee-3316-4807-9767-95a12b1f2dbb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.100786057s STEP: Saw pod success Apr 25 22:10:34.099: INFO: Pod "pod-configmaps-cf688fee-3316-4807-9767-95a12b1f2dbb" satisfied condition "success or failure" Apr 25 22:10:34.102: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-cf688fee-3316-4807-9767-95a12b1f2dbb container configmap-volume-test: STEP: delete the pod Apr 25 22:10:34.118: INFO: Waiting for pod pod-configmaps-cf688fee-3316-4807-9767-95a12b1f2dbb to disappear Apr 25 22:10:34.134: INFO: Pod pod-configmaps-cf688fee-3316-4807-9767-95a12b1f2dbb no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:10:34.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1412" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":249,"skipped":4078,"failed":0} SSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:10:34.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Apr 25 22:10:34.235: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the sample API server. Apr 25 22:10:35.027: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Apr 25 22:10:37.114: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723449435, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723449435, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723449435, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723449435, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 25 22:10:39.740: INFO: Waited 617.155451ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:10:40.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-9419" for this suite. 
• [SLOW TEST:6.691 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":250,"skipped":4082,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:10:40.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 25 22:10:40.893: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Apr 25 22:10:43.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7945 create -f -' Apr 25 22:10:46.848: INFO: stderr: "" Apr 25 22:10:46.848: INFO: stdout: "e2e-test-crd-publish-openapi-3560-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Apr 25 22:10:46.849: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7945 delete e2e-test-crd-publish-openapi-3560-crds test-cr' Apr 25 22:10:46.979: INFO: stderr: "" Apr 25 22:10:46.979: INFO: stdout: "e2e-test-crd-publish-openapi-3560-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Apr 25 22:10:46.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7945 apply -f -' Apr 25 22:10:47.259: INFO: stderr: "" Apr 25 22:10:47.259: INFO: stdout: "e2e-test-crd-publish-openapi-3560-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Apr 25 22:10:47.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7945 delete e2e-test-crd-publish-openapi-3560-crds test-cr' Apr 25 22:10:47.362: INFO: stderr: "" Apr 25 22:10:47.362: INFO: stdout: "e2e-test-crd-publish-openapi-3560-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Apr 25 22:10:47.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3560-crds' Apr 25 22:10:47.611: INFO: stderr: "" Apr 25 22:10:47.611: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3560-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:10:50.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7945" for this suite. 
• [SLOW TEST:9.698 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":251,"skipped":4083,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:10:50.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:11:06.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5492" for this suite. • [SLOW TEST:16.129 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":278,"completed":252,"skipped":4086,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:11:06.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 25 22:11:07.265: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 25 22:11:09.365: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723449467, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723449467, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723449467, loc:(*time.Location)(0x78ee080)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723449467, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 25 22:11:12.412: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:11:12.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5913" for this suite. STEP: Destroying namespace "webhook-5913-markers" for this suite. 
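[Editor's note] The guard exercised in the test above is that the API server exempts ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects themselves from admission webhooks, so a misbehaving webhook cannot block its own removal. A sketch of the kind of registration the test creates (names, paths, and the CA bundle are placeholders, not values from this run):

```yaml
# Illustrative only: even though the rules below match webhook
# configuration objects, the API server skips calling admission webhooks
# for them — which is why the dummy configurations in the test above
# remain deletable despite a deny-everything webhook being registered.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-webhook-configuration-deletions   # hypothetical name
webhooks:
  - name: deny.webhook.example.com             # hypothetical name
    rules:
      - apiGroups: ["admissionregistration.k8s.io"]
        apiVersions: ["*"]
        operations: ["CREATE", "UPDATE", "DELETE"]
        resources:
          - validatingwebhookconfigurations
          - mutatingwebhookconfigurations
    clientConfig:
      service:
        namespace: webhook-5913
        name: e2e-test-webhook
        path: /always-deny                     # hypothetical path
      caBundle: <base64-encoded CA>            # placeholder
    sideEffects: None
    admissionReviewVersions: ["v1"]
    failurePolicy: Fail
```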
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.014 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":253,"skipped":4088,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:11:12.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:11:18.031: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-168" for this suite. • [SLOW TEST:5.457 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":254,"skipped":4124,"failed":0} [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:11:18.132: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD Apr 25 22:11:18.210: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:11:33.035: INFO: Waiting up to 3m0s for all (but 0) nodes
to be ready STEP: Destroying namespace "crd-publish-openapi-8720" for this suite. • [SLOW TEST:14.910 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":255,"skipped":4124,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:11:33.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 25 22:11:33.614: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 25 22:11:35.660: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, 
AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723449493, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723449493, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723449493, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723449493, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 25 22:11:37.664: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723449493, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723449493, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723449493, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723449493, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 25 22:11:40.687: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 
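[Editor's note] The listing test that follows registers several mutating webhooks under a shared label, lists them with a label selector, then removes them all with a delete-collection call scoped to the same selector — after which a newly created ConfigMap is no longer mutated. A sketch of one such registration, with illustrative names and label:

```yaml
# Illustrative only: one of several mutating webhook configurations the
# test creates; the shared label is what the subsequent list and
# delete-collection operations select on.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: e2e-test-mutating-webhook-0      # hypothetical name
  labels:
    e2e-list-test-webhooks: "true"       # hypothetical selector label
webhooks:
  - name: adding-configmap-data.example.com   # hypothetical name
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["configmaps"]
    clientConfig:
      service:
        namespace: webhook-8035
        name: e2e-test-webhook
        path: /mutating-configmaps       # hypothetical path
      caBundle: <base64-encoded CA>      # placeholder
    sideEffects: None
    admissionReviewVersions: ["v1"]
```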
[It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created mutating webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of mutating webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:11:41.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8035" for this suite. STEP: Destroying namespace "webhook-8035-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.285 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":256,"skipped":4128,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a 
kubernetes client Apr 25 22:11:41.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-9541 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-9541 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9541 Apr 25 22:11:41.467: INFO: Found 0 stateful pods, waiting for 1 Apr 25 22:11:51.471: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Apr 25 22:11:51.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9541 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 25 22:11:51.747: INFO: stderr: "I0425 22:11:51.629982 3125 log.go:172] (0xc000107340) (0xc00092e140) Create stream\nI0425 22:11:51.630035 3125 log.go:172] (0xc000107340) (0xc00092e140) Stream added, broadcasting: 1\nI0425 22:11:51.632991 3125 log.go:172] (0xc000107340) Reply frame received for 1\nI0425 22:11:51.633041 3125 log.go:172] (0xc000107340) (0xc0009d0000) Create stream\nI0425 22:11:51.633056 3125 log.go:172] (0xc000107340) (0xc0009d0000) Stream added, broadcasting: 3\nI0425 22:11:51.634329 3125 log.go:172] (0xc000107340) Reply 
frame received for 3\nI0425 22:11:51.634405 3125 log.go:172] (0xc000107340) (0xc000423400) Create stream\nI0425 22:11:51.634433 3125 log.go:172] (0xc000107340) (0xc000423400) Stream added, broadcasting: 5\nI0425 22:11:51.635648 3125 log.go:172] (0xc000107340) Reply frame received for 5\nI0425 22:11:51.713321 3125 log.go:172] (0xc000107340) Data frame received for 5\nI0425 22:11:51.713345 3125 log.go:172] (0xc000423400) (5) Data frame handling\nI0425 22:11:51.713357 3125 log.go:172] (0xc000423400) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0425 22:11:51.738161 3125 log.go:172] (0xc000107340) Data frame received for 3\nI0425 22:11:51.738205 3125 log.go:172] (0xc0009d0000) (3) Data frame handling\nI0425 22:11:51.738224 3125 log.go:172] (0xc0009d0000) (3) Data frame sent\nI0425 22:11:51.738242 3125 log.go:172] (0xc000107340) Data frame received for 3\nI0425 22:11:51.738257 3125 log.go:172] (0xc0009d0000) (3) Data frame handling\nI0425 22:11:51.738318 3125 log.go:172] (0xc000107340) Data frame received for 5\nI0425 22:11:51.738366 3125 log.go:172] (0xc000423400) (5) Data frame handling\nI0425 22:11:51.740479 3125 log.go:172] (0xc000107340) Data frame received for 1\nI0425 22:11:51.740516 3125 log.go:172] (0xc00092e140) (1) Data frame handling\nI0425 22:11:51.740549 3125 log.go:172] (0xc00092e140) (1) Data frame sent\nI0425 22:11:51.740571 3125 log.go:172] (0xc000107340) (0xc00092e140) Stream removed, broadcasting: 1\nI0425 22:11:51.740613 3125 log.go:172] (0xc000107340) Go away received\nI0425 22:11:51.741631 3125 log.go:172] (0xc000107340) (0xc00092e140) Stream removed, broadcasting: 1\nI0425 22:11:51.741661 3125 log.go:172] (0xc000107340) (0xc0009d0000) Stream removed, broadcasting: 3\nI0425 22:11:51.741676 3125 log.go:172] (0xc000107340) (0xc000423400) Stream removed, broadcasting: 5\n" Apr 25 22:11:51.747: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 25 22:11:51.747: INFO: stdout of mv -v 
/usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 25 22:11:51.751: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 25 22:12:01.755: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 25 22:12:01.755: INFO: Waiting for statefulset status.replicas updated to 0 Apr 25 22:12:01.791: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999505s Apr 25 22:12:02.795: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.995566332s Apr 25 22:12:03.799: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.990754593s Apr 25 22:12:04.804: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.986956179s Apr 25 22:12:05.808: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.982050528s Apr 25 22:12:06.869: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.977726342s Apr 25 22:12:07.874: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.916994006s Apr 25 22:12:08.878: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.912349249s Apr 25 22:12:09.883: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.907871009s Apr 25 22:12:11.121: INFO: Verifying statefulset ss doesn't scale past 1 for another 903.062267ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9541 Apr 25 22:12:12.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9541 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 25 22:12:12.346: INFO: stderr: "I0425 22:12:12.258987 3147 log.go:172] (0xc0000e2a50) (0xc000675e00) Create stream\nI0425 22:12:12.259054 3147 log.go:172] (0xc0000e2a50) (0xc000675e00) Stream added, broadcasting: 1\nI0425 22:12:12.262048 3147 log.go:172] 
(0xc0000e2a50) Reply frame received for 1\nI0425 22:12:12.262101 3147 log.go:172] (0xc0000e2a50) (0xc00059c5a0) Create stream\nI0425 22:12:12.262115 3147 log.go:172] (0xc0000e2a50) (0xc00059c5a0) Stream added, broadcasting: 3\nI0425 22:12:12.263081 3147 log.go:172] (0xc0000e2a50) Reply frame received for 3\nI0425 22:12:12.263108 3147 log.go:172] (0xc0000e2a50) (0xc0001f3360) Create stream\nI0425 22:12:12.263116 3147 log.go:172] (0xc0000e2a50) (0xc0001f3360) Stream added, broadcasting: 5\nI0425 22:12:12.263957 3147 log.go:172] (0xc0000e2a50) Reply frame received for 5\nI0425 22:12:12.339567 3147 log.go:172] (0xc0000e2a50) Data frame received for 5\nI0425 22:12:12.339615 3147 log.go:172] (0xc0001f3360) (5) Data frame handling\nI0425 22:12:12.339630 3147 log.go:172] (0xc0001f3360) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0425 22:12:12.339664 3147 log.go:172] (0xc0000e2a50) Data frame received for 3\nI0425 22:12:12.339707 3147 log.go:172] (0xc00059c5a0) (3) Data frame handling\nI0425 22:12:12.339725 3147 log.go:172] (0xc00059c5a0) (3) Data frame sent\nI0425 22:12:12.339736 3147 log.go:172] (0xc0000e2a50) Data frame received for 3\nI0425 22:12:12.339746 3147 log.go:172] (0xc00059c5a0) (3) Data frame handling\nI0425 22:12:12.339785 3147 log.go:172] (0xc0000e2a50) Data frame received for 5\nI0425 22:12:12.339812 3147 log.go:172] (0xc0001f3360) (5) Data frame handling\nI0425 22:12:12.340758 3147 log.go:172] (0xc0000e2a50) Data frame received for 1\nI0425 22:12:12.340778 3147 log.go:172] (0xc000675e00) (1) Data frame handling\nI0425 22:12:12.340800 3147 log.go:172] (0xc000675e00) (1) Data frame sent\nI0425 22:12:12.340814 3147 log.go:172] (0xc0000e2a50) (0xc000675e00) Stream removed, broadcasting: 1\nI0425 22:12:12.340825 3147 log.go:172] (0xc0000e2a50) Go away received\nI0425 22:12:12.341330 3147 log.go:172] (0xc0000e2a50) (0xc000675e00) Stream removed, broadcasting: 1\nI0425 22:12:12.341348 3147 log.go:172] (0xc0000e2a50) (0xc00059c5a0) 
Stream removed, broadcasting: 3\nI0425 22:12:12.341358 3147 log.go:172] (0xc0000e2a50) (0xc0001f3360) Stream removed, broadcasting: 5\n" Apr 25 22:12:12.346: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 25 22:12:12.346: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 25 22:12:12.349: INFO: Found 1 stateful pods, waiting for 3 Apr 25 22:12:22.354: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 25 22:12:22.354: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 25 22:12:22.354: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Apr 25 22:12:22.361: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9541 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 25 22:12:22.592: INFO: stderr: "I0425 22:12:22.497280 3170 log.go:172] (0xc0007c8a50) (0xc0009ce000) Create stream\nI0425 22:12:22.497345 3170 log.go:172] (0xc0007c8a50) (0xc0009ce000) Stream added, broadcasting: 1\nI0425 22:12:22.500699 3170 log.go:172] (0xc0007c8a50) Reply frame received for 1\nI0425 22:12:22.500767 3170 log.go:172] (0xc0007c8a50) (0xc00065dc20) Create stream\nI0425 22:12:22.500799 3170 log.go:172] (0xc0007c8a50) (0xc00065dc20) Stream added, broadcasting: 3\nI0425 22:12:22.501924 3170 log.go:172] (0xc0007c8a50) Reply frame received for 3\nI0425 22:12:22.501973 3170 log.go:172] (0xc0007c8a50) (0xc00065de00) Create stream\nI0425 22:12:22.501988 3170 log.go:172] (0xc0007c8a50) (0xc00065de00) Stream added, broadcasting: 5\nI0425 22:12:22.502852 3170 log.go:172] (0xc0007c8a50) Reply frame received for 5\nI0425 22:12:22.584161 3170 log.go:172] 
(0xc0007c8a50) Data frame received for 5\nI0425 22:12:22.584198 3170 log.go:172] (0xc00065de00) (5) Data frame handling\nI0425 22:12:22.584220 3170 log.go:172] (0xc00065de00) (5) Data frame sent\nI0425 22:12:22.584232 3170 log.go:172] (0xc0007c8a50) Data frame received for 5\nI0425 22:12:22.584243 3170 log.go:172] (0xc00065de00) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0425 22:12:22.584451 3170 log.go:172] (0xc0007c8a50) Data frame received for 3\nI0425 22:12:22.584490 3170 log.go:172] (0xc00065dc20) (3) Data frame handling\nI0425 22:12:22.584520 3170 log.go:172] (0xc00065dc20) (3) Data frame sent\nI0425 22:12:22.584545 3170 log.go:172] (0xc0007c8a50) Data frame received for 3\nI0425 22:12:22.584559 3170 log.go:172] (0xc00065dc20) (3) Data frame handling\nI0425 22:12:22.586319 3170 log.go:172] (0xc0007c8a50) Data frame received for 1\nI0425 22:12:22.586341 3170 log.go:172] (0xc0009ce000) (1) Data frame handling\nI0425 22:12:22.586354 3170 log.go:172] (0xc0009ce000) (1) Data frame sent\nI0425 22:12:22.586424 3170 log.go:172] (0xc0007c8a50) (0xc0009ce000) Stream removed, broadcasting: 1\nI0425 22:12:22.586664 3170 log.go:172] (0xc0007c8a50) Go away received\nI0425 22:12:22.586839 3170 log.go:172] (0xc0007c8a50) (0xc0009ce000) Stream removed, broadcasting: 1\nI0425 22:12:22.586862 3170 log.go:172] (0xc0007c8a50) (0xc00065dc20) Stream removed, broadcasting: 3\nI0425 22:12:22.586882 3170 log.go:172] (0xc0007c8a50) (0xc00065de00) Stream removed, broadcasting: 5\n" Apr 25 22:12:22.592: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 25 22:12:22.592: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 25 22:12:22.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9541 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 25 
22:12:22.845: INFO: stderr: "I0425 22:12:22.741746 3192 log.go:172] (0xc0009f6dc0) (0xc0009648c0) Create stream\nI0425 22:12:22.741831 3192 log.go:172] (0xc0009f6dc0) (0xc0009648c0) Stream added, broadcasting: 1\nI0425 22:12:22.752023 3192 log.go:172] (0xc0009f6dc0) Reply frame received for 1\nI0425 22:12:22.752080 3192 log.go:172] (0xc0009f6dc0) (0xc000552780) Create stream\nI0425 22:12:22.752090 3192 log.go:172] (0xc0009f6dc0) (0xc000552780) Stream added, broadcasting: 3\nI0425 22:12:22.755213 3192 log.go:172] (0xc0009f6dc0) Reply frame received for 3\nI0425 22:12:22.755264 3192 log.go:172] (0xc0009f6dc0) (0xc00073f540) Create stream\nI0425 22:12:22.755274 3192 log.go:172] (0xc0009f6dc0) (0xc00073f540) Stream added, broadcasting: 5\nI0425 22:12:22.756431 3192 log.go:172] (0xc0009f6dc0) Reply frame received for 5\nI0425 22:12:22.813987 3192 log.go:172] (0xc0009f6dc0) Data frame received for 5\nI0425 22:12:22.814033 3192 log.go:172] (0xc00073f540) (5) Data frame handling\nI0425 22:12:22.814055 3192 log.go:172] (0xc00073f540) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0425 22:12:22.837750 3192 log.go:172] (0xc0009f6dc0) Data frame received for 3\nI0425 22:12:22.837770 3192 log.go:172] (0xc000552780) (3) Data frame handling\nI0425 22:12:22.837783 3192 log.go:172] (0xc000552780) (3) Data frame sent\nI0425 22:12:22.837792 3192 log.go:172] (0xc0009f6dc0) Data frame received for 3\nI0425 22:12:22.837806 3192 log.go:172] (0xc000552780) (3) Data frame handling\nI0425 22:12:22.837965 3192 log.go:172] (0xc0009f6dc0) Data frame received for 5\nI0425 22:12:22.837978 3192 log.go:172] (0xc00073f540) (5) Data frame handling\nI0425 22:12:22.839877 3192 log.go:172] (0xc0009f6dc0) Data frame received for 1\nI0425 22:12:22.839900 3192 log.go:172] (0xc0009648c0) (1) Data frame handling\nI0425 22:12:22.839912 3192 log.go:172] (0xc0009648c0) (1) Data frame sent\nI0425 22:12:22.839924 3192 log.go:172] (0xc0009f6dc0) (0xc0009648c0) Stream removed, 
broadcasting: 1\nI0425 22:12:22.839983 3192 log.go:172] (0xc0009f6dc0) Go away received\nI0425 22:12:22.840342 3192 log.go:172] (0xc0009f6dc0) (0xc0009648c0) Stream removed, broadcasting: 1\nI0425 22:12:22.840363 3192 log.go:172] (0xc0009f6dc0) (0xc000552780) Stream removed, broadcasting: 3\nI0425 22:12:22.840380 3192 log.go:172] (0xc0009f6dc0) (0xc00073f540) Stream removed, broadcasting: 5\n" Apr 25 22:12:22.845: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 25 22:12:22.845: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 25 22:12:22.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9541 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 25 22:12:23.086: INFO: stderr: "I0425 22:12:22.981254 3212 log.go:172] (0xc000998c60) (0xc0009ca8c0) Create stream\nI0425 22:12:22.981326 3212 log.go:172] (0xc000998c60) (0xc0009ca8c0) Stream added, broadcasting: 1\nI0425 22:12:22.985684 3212 log.go:172] (0xc000998c60) Reply frame received for 1\nI0425 22:12:22.985722 3212 log.go:172] (0xc000998c60) (0xc0009ca960) Create stream\nI0425 22:12:22.985741 3212 log.go:172] (0xc000998c60) (0xc0009ca960) Stream added, broadcasting: 3\nI0425 22:12:22.986548 3212 log.go:172] (0xc000998c60) Reply frame received for 3\nI0425 22:12:22.986569 3212 log.go:172] (0xc000998c60) (0xc0009caa00) Create stream\nI0425 22:12:22.986584 3212 log.go:172] (0xc000998c60) (0xc0009caa00) Stream added, broadcasting: 5\nI0425 22:12:22.987386 3212 log.go:172] (0xc000998c60) Reply frame received for 5\nI0425 22:12:23.050962 3212 log.go:172] (0xc000998c60) Data frame received for 5\nI0425 22:12:23.050984 3212 log.go:172] (0xc0009caa00) (5) Data frame handling\nI0425 22:12:23.050996 3212 log.go:172] (0xc0009caa00) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0425 
22:12:23.077780 3212 log.go:172] (0xc000998c60) Data frame received for 5\nI0425 22:12:23.077799 3212 log.go:172] (0xc0009caa00) (5) Data frame handling\nI0425 22:12:23.077866 3212 log.go:172] (0xc000998c60) Data frame received for 3\nI0425 22:12:23.077911 3212 log.go:172] (0xc0009ca960) (3) Data frame handling\nI0425 22:12:23.077935 3212 log.go:172] (0xc0009ca960) (3) Data frame sent\nI0425 22:12:23.077951 3212 log.go:172] (0xc000998c60) Data frame received for 3\nI0425 22:12:23.077960 3212 log.go:172] (0xc0009ca960) (3) Data frame handling\nI0425 22:12:23.079737 3212 log.go:172] (0xc000998c60) Data frame received for 1\nI0425 22:12:23.079766 3212 log.go:172] (0xc0009ca8c0) (1) Data frame handling\nI0425 22:12:23.079784 3212 log.go:172] (0xc0009ca8c0) (1) Data frame sent\nI0425 22:12:23.079805 3212 log.go:172] (0xc000998c60) (0xc0009ca8c0) Stream removed, broadcasting: 1\nI0425 22:12:23.079825 3212 log.go:172] (0xc000998c60) Go away received\nI0425 22:12:23.080284 3212 log.go:172] (0xc000998c60) (0xc0009ca8c0) Stream removed, broadcasting: 1\nI0425 22:12:23.080307 3212 log.go:172] (0xc000998c60) (0xc0009ca960) Stream removed, broadcasting: 3\nI0425 22:12:23.080319 3212 log.go:172] (0xc000998c60) (0xc0009caa00) Stream removed, broadcasting: 5\n" Apr 25 22:12:23.086: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 25 22:12:23.086: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 25 22:12:23.086: INFO: Waiting for statefulset status.replicas updated to 0 Apr 25 22:12:23.089: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Apr 25 22:12:33.097: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 25 22:12:33.097: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 25 22:12:33.097: INFO: Waiting for pod ss-2 to enter Running - 
Ready=false, currently Running - Ready=false Apr 25 22:12:33.109: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999575s Apr 25 22:12:34.114: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.994889474s Apr 25 22:12:35.118: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.989838278s Apr 25 22:12:36.123: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.985563093s Apr 25 22:12:37.128: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.98017195s Apr 25 22:12:38.133: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.97562285s Apr 25 22:12:39.138: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.970330476s Apr 25 22:12:40.143: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.966149353s Apr 25 22:12:41.148: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.961129905s Apr 25 22:12:42.153: INFO: Verifying statefulset ss doesn't scale past 3 for another 956.099543ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-9541 Apr 25 22:12:43.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9541 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 25 22:12:43.380: INFO: stderr: "I0425 22:12:43.283589 3232 log.go:172] (0xc0003d6dc0) (0xc0005f5b80) Create stream\nI0425 22:12:43.283637 3232 log.go:172] (0xc0003d6dc0) (0xc0005f5b80) Stream added, broadcasting: 1\nI0425 22:12:43.286177 3232 log.go:172] (0xc0003d6dc0) Reply frame received for 1\nI0425 22:12:43.286209 3232 log.go:172] (0xc0003d6dc0) (0xc0008f0000) Create stream\nI0425 22:12:43.286221 3232 log.go:172] (0xc0003d6dc0) (0xc0008f0000) Stream added, broadcasting: 3\nI0425 22:12:43.287241 3232 log.go:172] (0xc0003d6dc0) Reply frame received for 3\nI0425 22:12:43.287284 3232 log.go:172] (0xc0003d6dc0) (0xc0008fa000) Create 
stream\nI0425 22:12:43.287299 3232 log.go:172] (0xc0003d6dc0) (0xc0008fa000) Stream added, broadcasting: 5\nI0425 22:12:43.288171 3232 log.go:172] (0xc0003d6dc0) Reply frame received for 5\nI0425 22:12:43.374488 3232 log.go:172] (0xc0003d6dc0) Data frame received for 5\nI0425 22:12:43.374547 3232 log.go:172] (0xc0008fa000) (5) Data frame handling\nI0425 22:12:43.374573 3232 log.go:172] (0xc0008fa000) (5) Data frame sent\nI0425 22:12:43.374595 3232 log.go:172] (0xc0003d6dc0) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0425 22:12:43.374617 3232 log.go:172] (0xc0008fa000) (5) Data frame handling\nI0425 22:12:43.374703 3232 log.go:172] (0xc0003d6dc0) Data frame received for 3\nI0425 22:12:43.374736 3232 log.go:172] (0xc0008f0000) (3) Data frame handling\nI0425 22:12:43.374764 3232 log.go:172] (0xc0008f0000) (3) Data frame sent\nI0425 22:12:43.374780 3232 log.go:172] (0xc0003d6dc0) Data frame received for 3\nI0425 22:12:43.374797 3232 log.go:172] (0xc0008f0000) (3) Data frame handling\nI0425 22:12:43.376159 3232 log.go:172] (0xc0003d6dc0) Data frame received for 1\nI0425 22:12:43.376185 3232 log.go:172] (0xc0005f5b80) (1) Data frame handling\nI0425 22:12:43.376200 3232 log.go:172] (0xc0005f5b80) (1) Data frame sent\nI0425 22:12:43.376211 3232 log.go:172] (0xc0003d6dc0) (0xc0005f5b80) Stream removed, broadcasting: 1\nI0425 22:12:43.376553 3232 log.go:172] (0xc0003d6dc0) (0xc0005f5b80) Stream removed, broadcasting: 1\nI0425 22:12:43.376572 3232 log.go:172] (0xc0003d6dc0) (0xc0008f0000) Stream removed, broadcasting: 3\nI0425 22:12:43.376581 3232 log.go:172] (0xc0003d6dc0) (0xc0008fa000) Stream removed, broadcasting: 5\n" Apr 25 22:12:43.380: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 25 22:12:43.380: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 25 22:12:43.380: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-9541 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 25 22:12:43.583: INFO: stderr: "I0425 22:12:43.509023 3251 log.go:172] (0xc0009d8630) (0xc0009a8140) Create stream\nI0425 22:12:43.509091 3251 log.go:172] (0xc0009d8630) (0xc0009a8140) Stream added, broadcasting: 1\nI0425 22:12:43.511871 3251 log.go:172] (0xc0009d8630) Reply frame received for 1\nI0425 22:12:43.511901 3251 log.go:172] (0xc0009d8630) (0xc000b14000) Create stream\nI0425 22:12:43.511910 3251 log.go:172] (0xc0009d8630) (0xc000b14000) Stream added, broadcasting: 3\nI0425 22:12:43.512635 3251 log.go:172] (0xc0009d8630) Reply frame received for 3\nI0425 22:12:43.512657 3251 log.go:172] (0xc0009d8630) (0xc0009a81e0) Create stream\nI0425 22:12:43.512671 3251 log.go:172] (0xc0009d8630) (0xc0009a81e0) Stream added, broadcasting: 5\nI0425 22:12:43.513653 3251 log.go:172] (0xc0009d8630) Reply frame received for 5\nI0425 22:12:43.577384 3251 log.go:172] (0xc0009d8630) Data frame received for 3\nI0425 22:12:43.577414 3251 log.go:172] (0xc000b14000) (3) Data frame handling\nI0425 22:12:43.577437 3251 log.go:172] (0xc000b14000) (3) Data frame sent\nI0425 22:12:43.577446 3251 log.go:172] (0xc0009d8630) Data frame received for 3\nI0425 22:12:43.577452 3251 log.go:172] (0xc000b14000) (3) Data frame handling\nI0425 22:12:43.577460 3251 log.go:172] (0xc0009d8630) Data frame received for 5\nI0425 22:12:43.577469 3251 log.go:172] (0xc0009a81e0) (5) Data frame handling\nI0425 22:12:43.577485 3251 log.go:172] (0xc0009a81e0) (5) Data frame sent\nI0425 22:12:43.577493 3251 log.go:172] (0xc0009d8630) Data frame received for 5\nI0425 22:12:43.577497 3251 log.go:172] (0xc0009a81e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0425 22:12:43.579063 3251 log.go:172] (0xc0009d8630) Data frame received for 1\nI0425 22:12:43.579077 3251 log.go:172] (0xc0009a8140) (1) Data frame handling\nI0425 
22:12:43.579090 3251 log.go:172] (0xc0009a8140) (1) Data frame sent\nI0425 22:12:43.579123 3251 log.go:172] (0xc0009d8630) (0xc0009a8140) Stream removed, broadcasting: 1\nI0425 22:12:43.579310 3251 log.go:172] (0xc0009d8630) Go away received\nI0425 22:12:43.579415 3251 log.go:172] (0xc0009d8630) (0xc0009a8140) Stream removed, broadcasting: 1\nI0425 22:12:43.579426 3251 log.go:172] (0xc0009d8630) (0xc000b14000) Stream removed, broadcasting: 3\nI0425 22:12:43.579432 3251 log.go:172] (0xc0009d8630) (0xc0009a81e0) Stream removed, broadcasting: 5\n" Apr 25 22:12:43.584: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 25 22:12:43.584: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 25 22:12:43.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9541 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Apr 25 22:12:43.784: INFO: stderr: "I0425 22:12:43.712929 3271 log.go:172] (0xc000104dc0) (0xc000a12000) Create stream\nI0425 22:12:43.712982 3271 log.go:172] (0xc000104dc0) (0xc000a12000) Stream added, broadcasting: 1\nI0425 22:12:43.715875 3271 log.go:172] (0xc000104dc0) Reply frame received for 1\nI0425 22:12:43.715926 3271 log.go:172] (0xc000104dc0) (0xc000646000) Create stream\nI0425 22:12:43.715944 3271 log.go:172] (0xc000104dc0) (0xc000646000) Stream added, broadcasting: 3\nI0425 22:12:43.716898 3271 log.go:172] (0xc000104dc0) Reply frame received for 3\nI0425 22:12:43.716944 3271 log.go:172] (0xc000104dc0) (0xc000a120a0) Create stream\nI0425 22:12:43.716963 3271 log.go:172] (0xc000104dc0) (0xc000a120a0) Stream added, broadcasting: 5\nI0425 22:12:43.717962 3271 log.go:172] (0xc000104dc0) Reply frame received for 5\nI0425 22:12:43.777009 3271 log.go:172] (0xc000104dc0) Data frame received for 5\nI0425 22:12:43.777053 3271 log.go:172] (0xc000a120a0) (5) Data 
frame handling\nI0425 22:12:43.777067 3271 log.go:172] (0xc000a120a0) (5) Data frame sent\nI0425 22:12:43.777078 3271 log.go:172] (0xc000104dc0) Data frame received for 5\nI0425 22:12:43.777087 3271 log.go:172] (0xc000a120a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0425 22:12:43.777257 3271 log.go:172] (0xc000104dc0) Data frame received for 3\nI0425 22:12:43.777294 3271 log.go:172] (0xc000646000) (3) Data frame handling\nI0425 22:12:43.777327 3271 log.go:172] (0xc000646000) (3) Data frame sent\nI0425 22:12:43.777363 3271 log.go:172] (0xc000104dc0) Data frame received for 3\nI0425 22:12:43.777382 3271 log.go:172] (0xc000646000) (3) Data frame handling\nI0425 22:12:43.778642 3271 log.go:172] (0xc000104dc0) Data frame received for 1\nI0425 22:12:43.778666 3271 log.go:172] (0xc000a12000) (1) Data frame handling\nI0425 22:12:43.778688 3271 log.go:172] (0xc000a12000) (1) Data frame sent\nI0425 22:12:43.778702 3271 log.go:172] (0xc000104dc0) (0xc000a12000) Stream removed, broadcasting: 1\nI0425 22:12:43.778731 3271 log.go:172] (0xc000104dc0) Go away received\nI0425 22:12:43.779163 3271 log.go:172] (0xc000104dc0) (0xc000a12000) Stream removed, broadcasting: 1\nI0425 22:12:43.779199 3271 log.go:172] (0xc000104dc0) (0xc000646000) Stream removed, broadcasting: 3\nI0425 22:12:43.779221 3271 log.go:172] (0xc000104dc0) (0xc000a120a0) Stream removed, broadcasting: 5\n" Apr 25 22:12:43.784: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Apr 25 22:12:43.784: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Apr 25 22:12:43.784: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Apr 25 22:13:03.802: INFO: 
Deleting all statefulset in ns statefulset-9541 Apr 25 22:13:03.805: INFO: Scaling statefulset ss to 0 Apr 25 22:13:03.813: INFO: Waiting for statefulset status.replicas updated to 0 Apr 25 22:13:03.815: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:13:03.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9541" for this suite. • [SLOW TEST:82.539 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":257,"skipped":4178,"failed":0} [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:13:03.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-be2d9962-0f7e-47b0-8994-7fc761799ab2 STEP: Creating configMap with name cm-test-opt-upd-576d2a92-58d8-4065-8be6-18210f954ae2 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-be2d9962-0f7e-47b0-8994-7fc761799ab2 STEP: Updating configmap cm-test-opt-upd-576d2a92-58d8-4065-8be6-18210f954ae2 STEP: Creating configMap with name cm-test-opt-create-d90434e3-402b-428b-b208-fd9f61e2c9c6 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:14:24.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2706" for this suite. • [SLOW TEST:80.530 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":258,"skipped":4178,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:14:24.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:14:31.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7833" for this suite. • [SLOW TEST:7.286 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":278,"completed":259,"skipped":4192,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:14:31.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Apr 25 22:14:31.813: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2076 /api/v1/namespaces/watch-2076/configmaps/e2e-watch-test-label-changed 5032b468-d91f-4d20-a674-fb4ffc7eecbb 11035028 0 2020-04-25 22:14:31 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 25 22:14:31.813: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2076 /api/v1/namespaces/watch-2076/configmaps/e2e-watch-test-label-changed 5032b468-d91f-4d20-a674-fb4ffc7eecbb 11035029 0 2020-04-25 22:14:31 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Apr 25 22:14:31.813: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2076 /api/v1/namespaces/watch-2076/configmaps/e2e-watch-test-label-changed 5032b468-d91f-4d20-a674-fb4ffc7eecbb 11035030 0 2020-04-25 22:14:31 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Apr 25 22:14:41.892: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2076 /api/v1/namespaces/watch-2076/configmaps/e2e-watch-test-label-changed 5032b468-d91f-4d20-a674-fb4ffc7eecbb 11035074 0 2020-04-25 22:14:31 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 25 22:14:41.892: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2076 /api/v1/namespaces/watch-2076/configmaps/e2e-watch-test-label-changed 5032b468-d91f-4d20-a674-fb4ffc7eecbb 11035075 0 2020-04-25 22:14:31 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Apr 25 22:14:41.892: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2076 /api/v1/namespaces/watch-2076/configmaps/e2e-watch-test-label-changed 5032b468-d91f-4d20-a674-fb4ffc7eecbb 11035076 0 2020-04-25 22:14:31 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:14:41.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2076" for this suite. • [SLOW TEST:10.219 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":260,"skipped":4230,"failed":0} [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:14:41.904: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-secret-vsp9 STEP: Creating a pod to test atomic-volume-subpath Apr 25 22:14:42.052: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-vsp9" in namespace "subpath-4340" to be "success or failure" Apr 25 22:14:42.055: INFO: Pod 
"pod-subpath-test-secret-vsp9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.817975ms Apr 25 22:14:44.060: INFO: Pod "pod-subpath-test-secret-vsp9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008457058s Apr 25 22:14:46.064: INFO: Pod "pod-subpath-test-secret-vsp9": Phase="Running", Reason="", readiness=true. Elapsed: 4.012203609s Apr 25 22:14:48.068: INFO: Pod "pod-subpath-test-secret-vsp9": Phase="Running", Reason="", readiness=true. Elapsed: 6.01624712s Apr 25 22:14:50.072: INFO: Pod "pod-subpath-test-secret-vsp9": Phase="Running", Reason="", readiness=true. Elapsed: 8.020510266s Apr 25 22:14:52.076: INFO: Pod "pod-subpath-test-secret-vsp9": Phase="Running", Reason="", readiness=true. Elapsed: 10.024381037s Apr 25 22:14:54.080: INFO: Pod "pod-subpath-test-secret-vsp9": Phase="Running", Reason="", readiness=true. Elapsed: 12.028718815s Apr 25 22:14:56.085: INFO: Pod "pod-subpath-test-secret-vsp9": Phase="Running", Reason="", readiness=true. Elapsed: 14.033210882s Apr 25 22:14:58.089: INFO: Pod "pod-subpath-test-secret-vsp9": Phase="Running", Reason="", readiness=true. Elapsed: 16.037360568s Apr 25 22:15:00.093: INFO: Pod "pod-subpath-test-secret-vsp9": Phase="Running", Reason="", readiness=true. Elapsed: 18.041741907s Apr 25 22:15:02.098: INFO: Pod "pod-subpath-test-secret-vsp9": Phase="Running", Reason="", readiness=true. Elapsed: 20.046227601s Apr 25 22:15:04.102: INFO: Pod "pod-subpath-test-secret-vsp9": Phase="Running", Reason="", readiness=true. Elapsed: 22.050751692s Apr 25 22:15:06.107: INFO: Pod "pod-subpath-test-secret-vsp9": Phase="Running", Reason="", readiness=true. Elapsed: 24.055532908s Apr 25 22:15:08.112: INFO: Pod "pod-subpath-test-secret-vsp9": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.060021638s STEP: Saw pod success Apr 25 22:15:08.112: INFO: Pod "pod-subpath-test-secret-vsp9" satisfied condition "success or failure" Apr 25 22:15:08.115: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-secret-vsp9 container test-container-subpath-secret-vsp9: STEP: delete the pod Apr 25 22:15:08.171: INFO: Waiting for pod pod-subpath-test-secret-vsp9 to disappear Apr 25 22:15:08.173: INFO: Pod pod-subpath-test-secret-vsp9 no longer exists STEP: Deleting pod pod-subpath-test-secret-vsp9 Apr 25 22:15:08.174: INFO: Deleting pod "pod-subpath-test-secret-vsp9" in namespace "subpath-4340" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:15:08.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4340" for this suite. • [SLOW TEST:26.278 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":261,"skipped":4230,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:15:08.183: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test hostPath mode Apr 25 22:15:08.242: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-8521" to be "success or failure" Apr 25 22:15:08.262: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 19.213608ms Apr 25 22:15:10.266: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023722487s Apr 25 22:15:12.297: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 4.054694859s Apr 25 22:15:14.301: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.058787874s STEP: Saw pod success Apr 25 22:15:14.301: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Apr 25 22:15:14.305: INFO: Trying to get logs from node jerma-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod Apr 25 22:15:14.336: INFO: Waiting for pod pod-host-path-test to disappear Apr 25 22:15:14.348: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:15:14.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-8521" for this suite. 
• [SLOW TEST:6.211 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":262,"skipped":4247,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:15:14.395: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service endpoint-test2 in namespace services-4713 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4713 to expose endpoints map[] Apr 25 22:15:14.522: INFO: Get endpoints failed (3.052904ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Apr 25 22:15:15.524: INFO: successfully validated that service endpoint-test2 in namespace services-4713 exposes endpoints map[] (1.005868178s elapsed) STEP: Creating pod pod1 in namespace services-4713 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4713 to expose endpoints 
map[pod1:[80]] Apr 25 22:15:18.575: INFO: successfully validated that service endpoint-test2 in namespace services-4713 exposes endpoints map[pod1:[80]] (3.044110456s elapsed) STEP: Creating pod pod2 in namespace services-4713 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4713 to expose endpoints map[pod1:[80] pod2:[80]] Apr 25 22:15:22.763: INFO: successfully validated that service endpoint-test2 in namespace services-4713 exposes endpoints map[pod1:[80] pod2:[80]] (4.18363576s elapsed) STEP: Deleting pod pod1 in namespace services-4713 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4713 to expose endpoints map[pod2:[80]] Apr 25 22:15:23.787: INFO: successfully validated that service endpoint-test2 in namespace services-4713 exposes endpoints map[pod2:[80]] (1.02110388s elapsed) STEP: Deleting pod pod2 in namespace services-4713 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4713 to expose endpoints map[] Apr 25 22:15:24.808: INFO: successfully validated that service endpoint-test2 in namespace services-4713 exposes endpoints map[] (1.015120721s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:15:24.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4713" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:10.449 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":263,"skipped":4259,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:15:24.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 25 22:15:24.986: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-7e786ea9-0a1f-4076-a58e-c1266435a73d" in namespace "security-context-test-4301" to be "success or failure" Apr 25 22:15:25.012: INFO: Pod "busybox-readonly-false-7e786ea9-0a1f-4076-a58e-c1266435a73d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 26.858065ms Apr 25 22:15:27.016: INFO: Pod "busybox-readonly-false-7e786ea9-0a1f-4076-a58e-c1266435a73d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030777244s Apr 25 22:15:29.021: INFO: Pod "busybox-readonly-false-7e786ea9-0a1f-4076-a58e-c1266435a73d": Phase="Running", Reason="", readiness=true. Elapsed: 4.034928309s Apr 25 22:15:31.025: INFO: Pod "busybox-readonly-false-7e786ea9-0a1f-4076-a58e-c1266435a73d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.039150667s Apr 25 22:15:31.025: INFO: Pod "busybox-readonly-false-7e786ea9-0a1f-4076-a58e-c1266435a73d" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:15:31.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4301" for this suite. • [SLOW TEST:6.190 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 When creating a pod with readOnlyRootFilesystem /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:164 should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":264,"skipped":4275,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] 
Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:15:31.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Apr 25 22:15:35.690: INFO: Successfully updated pod "annotationupdate097a07d7-ef2b-4bf2-b0e7-460a43d2e57b" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:15:37.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9285" for this suite. 
• [SLOW TEST:6.680 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":265,"skipped":4276,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:15:37.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions 
resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:15:37.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5533" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":266,"skipped":4285,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:15:37.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0425 22:15:38.892162 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 25 22:15:38.892: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:15:38.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-39" for this suite. 
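As an aside, the behaviour the garbage-collector test above verifies (the ReplicaSet created by a Deployment is collected when the Deployment is deleted without orphaning) boils down to following ownerReferences. A toy Python sketch of that cascade, with entirely illustrative names and none of the real controller's code:

```python
# Toy sketch of cascading deletion driven by ownerReferences: deleting an
# owner removes every dependent that points at it, unless the delete
# requested orphaning, in which case the ownerReference is cleared instead.

def cascade_delete(objects, owner_uid, orphan=False):
    """Delete `owner_uid` from `objects` and handle its dependents.

    `objects` maps uid -> {"owner": uid-or-None}. Returns the surviving
    objects; input is not mutated.
    """
    survivors = {}
    for uid, meta in objects.items():
        if uid == owner_uid:
            continue  # the owner itself is always removed
        if meta.get("owner") == owner_uid:
            if not orphan:
                continue  # dependent is garbage-collected with its owner
            meta = {**meta, "owner": None}  # orphaned: ownerReference cleared
        survivors[uid] = meta
    return survivors

objects = {
    "deploy": {"owner": None},
    "rs": {"owner": "deploy"},
    "pod": {"owner": "rs"},
}
# A non-orphaning delete of the Deployment removes the ReplicaSet in one
# pass; a real controller would then also collect the pod once "rs" is gone.
print(cascade_delete(objects, "deploy"))
```

This single pass leaves the pod behind deliberately, which is why the test briefly observes "expected 0 pods, got 2 pods" before the collector converges over further passes.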
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":267,"skipped":4318,"failed":0} SSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:15:38.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 Apr 25 22:15:39.059: INFO: Creating deployment "test-recreate-deployment" Apr 25 22:15:39.070: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Apr 25 22:15:39.082: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Apr 25 22:15:41.154: INFO: Waiting deployment "test-recreate-deployment" to complete Apr 25 22:15:41.158: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723449739, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723449739, loc:(*time.Location)(0x78ee080)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have 
minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723449739, loc:(*time.Location)(0x78ee080)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723449739, loc:(*time.Location)(0x78ee080)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 25 22:15:43.164: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Apr 25 22:15:43.176: INFO: Updating deployment test-recreate-deployment Apr 25 22:15:43.176: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 Apr 25 22:15:43.728: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-5616 /apis/apps/v1/namespaces/deployment-5616/deployments/test-recreate-deployment 34871715-12c5-4a2b-9a66-b941f6d17425 11035502 2 2020-04-25 22:15:39 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00308c598 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-04-25 22:15:43 +0000 UTC,LastTransitionTime:2020-04-25 22:15:43 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-04-25 22:15:43 +0000 UTC,LastTransitionTime:2020-04-25 22:15:39 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Apr 25 22:15:43.731: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-5616 /apis/apps/v1/namespaces/deployment-5616/replicasets/test-recreate-deployment-5f94c574ff 5a0a300f-d64b-474d-b6c7-919353f02dee 11035500 1 2020-04-25 22:15:43 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 34871715-12c5-4a2b-9a66-b941f6d17425 0xc00331d267 0xc00331d268}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd 
docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00331d2c8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 25 22:15:43.731: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Apr 25 22:15:43.731: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-5616 /apis/apps/v1/namespaces/deployment-5616/replicasets/test-recreate-deployment-799c574856 fcc84e74-e9b8-4cb2-915a-d030f4de405d 11035490 2 2020-04-25 22:15:39 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 34871715-12c5-4a2b-9a66-b941f6d17425 0xc00331d337 0xc00331d338}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00331d3a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 25 22:15:43.734: INFO: Pod "test-recreate-deployment-5f94c574ff-n86sw" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-n86sw test-recreate-deployment-5f94c574ff- deployment-5616 /api/v1/namespaces/deployment-5616/pods/test-recreate-deployment-5f94c574ff-n86sw 11bfbbf0-8fa3-49a0-8371-29795cc84463 11035501 0 2020-04-25 22:15:43 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 5a0a300f-d64b-474d-b6c7-919353f02dee 0xc00331d807 0xc00331d808}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2t76t,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2t76t,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2t76t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdo
main:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 22:15:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 22:15:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 22:15:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-04-25 22:15:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-04-25 22:15:43 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:15:43.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5616" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":268,"skipped":4322,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:15:43.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating cluster-info Apr 25 22:15:43.864: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Apr 25 22:15:44.002: INFO: stderr: "" Apr 25 22:15:44.002: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:15:44.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8215" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":269,"skipped":4327,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:15:44.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod Apr 25 22:15:48.909: INFO: 
Successfully updated pod "annotationupdate537fb10a-274f-439f-86a3-cb56cba18867" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:15:50.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8331" for this suite. • [SLOW TEST:6.743 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":270,"skipped":4382,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:15:50.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Apr 25 22:15:51.024: INFO: Pod name pod-release: Found 0 pods out of 1 Apr 25 22:15:56.028: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is 
released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:15:56.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4309" for this suite. • [SLOW TEST:5.181 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":271,"skipped":4442,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:15:56.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating secret secrets-5767/secret-test-251f67c7-c64c-4698-95eb-50b5036a8167 STEP: Creating a pod to test consume secrets Apr 25 22:15:56.257: INFO: Waiting up to 5m0s for pod "pod-configmaps-71a0e996-c6d6-4ce1-8f60-ddc1fb838454" in namespace "secrets-5767" to be "success or failure" Apr 25 22:15:56.260: INFO: Pod "pod-configmaps-71a0e996-c6d6-4ce1-8f60-ddc1fb838454": Phase="Pending", Reason="", 
readiness=false. Elapsed: 2.946139ms Apr 25 22:15:58.358: INFO: Pod "pod-configmaps-71a0e996-c6d6-4ce1-8f60-ddc1fb838454": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100418572s Apr 25 22:16:00.362: INFO: Pod "pod-configmaps-71a0e996-c6d6-4ce1-8f60-ddc1fb838454": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.104669713s STEP: Saw pod success Apr 25 22:16:00.362: INFO: Pod "pod-configmaps-71a0e996-c6d6-4ce1-8f60-ddc1fb838454" satisfied condition "success or failure" Apr 25 22:16:00.364: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-71a0e996-c6d6-4ce1-8f60-ddc1fb838454 container env-test: STEP: delete the pod Apr 25 22:16:00.448: INFO: Waiting for pod pod-configmaps-71a0e996-c6d6-4ce1-8f60-ddc1fb838454 to disappear Apr 25 22:16:00.555: INFO: Pod pod-configmaps-71a0e996-c6d6-4ce1-8f60-ddc1fb838454 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Apr 25 22:16:00.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5767" for this suite. 
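The repeated `Phase="Pending" ... Elapsed: ...` lines throughout this run come from the framework polling a pod until it reaches a terminal phase or a timeout expires. A minimal Python sketch of that poll-until-condition pattern (names are hypothetical, not the framework's actual helper):

```python
import time

def wait_for_pod_condition(get_phase, timeout=300.0, interval=2.0):
    """Poll get_phase() until a terminal phase or the timeout is reached.

    Mirrors the "success or failure" condition in the log above:
    'Succeeded' and 'Failed' are terminal; anything else keeps polling.
    """
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        print(f'Pod: Phase="{phase}". Elapsed: {elapsed:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed + interval > timeout:
            raise TimeoutError(f"pod still {phase} after {timeout}s")
        time.sleep(interval)

# Simulated phase sequence, as seen in the log: Pending twice, then Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
assert wait_for_pod_condition(lambda: next(phases), interval=0.01) == "Succeeded"
```

In the real suite the per-step timeouts differ (5m0s for "success or failure", 3m0s for endpoint exposure), but the loop shape is the same.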
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":272,"skipped":4449,"failed":0} SSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Apr 25 22:16:00.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1525 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine Apr 25 22:16:00.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-5700' Apr 25 22:16:00.745: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 25 22:16:00.745: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created STEP: confirm that you can get logs from an rc Apr 25 22:16:00.800: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-xp5qk] Apr 25 22:16:00.800: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-xp5qk" in namespace "kubectl-5700" to be "running and ready" Apr 25 22:16:00.803: INFO: Pod "e2e-test-httpd-rc-xp5qk": Phase="Pending", Reason="", readiness=false. Elapsed: 3.075432ms Apr 25 22:16:02.896: INFO: Pod "e2e-test-httpd-rc-xp5qk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095850512s Apr 25 22:16:04.899: INFO: Pod "e2e-test-httpd-rc-xp5qk": Phase="Running", Reason="", readiness=true. Elapsed: 4.099037457s Apr 25 22:16:04.899: INFO: Pod "e2e-test-httpd-rc-xp5qk" satisfied condition "running and ready" Apr 25 22:16:04.899: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-xp5qk] Apr 25 22:16:04.899: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-5700' Apr 25 22:16:05.022: INFO: stderr: "" Apr 25 22:16:05.022: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.104. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.104. 
Set the 'ServerName' directive globally to suppress this message\n[Sat Apr 25 22:16:03.866175 2020] [mpm_event:notice] [pid 1:tid 139737978100584] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Sat Apr 25 22:16:03.866227 2020] [core:notice] [pid 1:tid 139737978100584] AH00094: Command line: 'httpd -D FOREGROUND'\n"
[AfterEach] Kubectl run rc
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1530
Apr 25 22:16:05.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-5700'
Apr 25 22:16:05.134: INFO: stderr: ""
Apr 25 22:16:05.134: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 22:16:05.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5700" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":278,"completed":273,"skipped":4454,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 22:16:05.158: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Performing setup for networking test in namespace pod-network-test-1477
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 25 22:16:05.221: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Apr 25 22:16:31.338: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.105 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1477 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 25 22:16:31.338: INFO: >>> kubeConfig: /root/.kube/config
I0425 22:16:31.371537 6 log.go:172] (0xc0048988f0) (0xc0018da0a0) Create stream
I0425 22:16:31.371578 6 log.go:172] (0xc0048988f0) (0xc0018da0a0) Stream added, broadcasting: 1
I0425 22:16:31.373733 6 log.go:172] (0xc0048988f0) Reply frame received for 1
I0425 22:16:31.373777 6 log.go:172] (0xc0048988f0) (0xc0018da140) Create stream
I0425 22:16:31.373801 6 log.go:172]
(0xc0048988f0) (0xc0018da140) Stream added, broadcasting: 3
I0425 22:16:31.374756 6 log.go:172] (0xc0048988f0) Reply frame received for 3
I0425 22:16:31.374791 6 log.go:172] (0xc0048988f0) (0xc002723b80) Create stream
I0425 22:16:31.374802 6 log.go:172] (0xc0048988f0) (0xc002723b80) Stream added, broadcasting: 5
I0425 22:16:31.375738 6 log.go:172] (0xc0048988f0) Reply frame received for 5
I0425 22:16:32.454004 6 log.go:172] (0xc0048988f0) Data frame received for 5
I0425 22:16:32.454128 6 log.go:172] (0xc002723b80) (5) Data frame handling
I0425 22:16:32.454208 6 log.go:172] (0xc0048988f0) Data frame received for 3
I0425 22:16:32.454259 6 log.go:172] (0xc0018da140) (3) Data frame handling
I0425 22:16:32.454294 6 log.go:172] (0xc0018da140) (3) Data frame sent
I0425 22:16:32.454333 6 log.go:172] (0xc0048988f0) Data frame received for 3
I0425 22:16:32.454378 6 log.go:172] (0xc0018da140) (3) Data frame handling
I0425 22:16:32.456660 6 log.go:172] (0xc0048988f0) Data frame received for 1
I0425 22:16:32.456685 6 log.go:172] (0xc0018da0a0) (1) Data frame handling
I0425 22:16:32.456698 6 log.go:172] (0xc0018da0a0) (1) Data frame sent
I0425 22:16:32.456722 6 log.go:172] (0xc0048988f0) (0xc0018da0a0) Stream removed, broadcasting: 1
I0425 22:16:32.456741 6 log.go:172] (0xc0048988f0) Go away received
I0425 22:16:32.456877 6 log.go:172] (0xc0048988f0) (0xc0018da0a0) Stream removed, broadcasting: 1
I0425 22:16:32.456909 6 log.go:172] (0xc0048988f0) (0xc0018da140) Stream removed, broadcasting: 3
I0425 22:16:32.456926 6 log.go:172] (0xc0048988f0) (0xc002723b80) Stream removed, broadcasting: 5
Apr 25 22:16:32.456: INFO: Found all expected endpoints: [netserver-0]
Apr 25 22:16:32.460: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.19 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1477 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 25 22:16:32.460: INFO: >>> kubeConfig:
/root/.kube/config
I0425 22:16:32.492568 6 log.go:172] (0xc002262000) (0xc0022d2320) Create stream
I0425 22:16:32.492593 6 log.go:172] (0xc002262000) (0xc0022d2320) Stream added, broadcasting: 1
I0425 22:16:32.494570 6 log.go:172] (0xc002262000) Reply frame received for 1
I0425 22:16:32.494624 6 log.go:172] (0xc002262000) (0xc002723c20) Create stream
I0425 22:16:32.494639 6 log.go:172] (0xc002262000) (0xc002723c20) Stream added, broadcasting: 3
I0425 22:16:32.495626 6 log.go:172] (0xc002262000) Reply frame received for 3
I0425 22:16:32.495668 6 log.go:172] (0xc002262000) (0xc002723cc0) Create stream
I0425 22:16:32.495686 6 log.go:172] (0xc002262000) (0xc002723cc0) Stream added, broadcasting: 5
I0425 22:16:32.496821 6 log.go:172] (0xc002262000) Reply frame received for 5
I0425 22:16:33.585454 6 log.go:172] (0xc002262000) Data frame received for 3
I0425 22:16:33.585498 6 log.go:172] (0xc002723c20) (3) Data frame handling
I0425 22:16:33.585529 6 log.go:172] (0xc002723c20) (3) Data frame sent
I0425 22:16:33.585550 6 log.go:172] (0xc002262000) Data frame received for 3
I0425 22:16:33.585562 6 log.go:172] (0xc002723c20) (3) Data frame handling
I0425 22:16:33.586024 6 log.go:172] (0xc002262000) Data frame received for 5
I0425 22:16:33.586054 6 log.go:172] (0xc002723cc0) (5) Data frame handling
I0425 22:16:33.587784 6 log.go:172] (0xc002262000) Data frame received for 1
I0425 22:16:33.587870 6 log.go:172] (0xc0022d2320) (1) Data frame handling
I0425 22:16:33.587916 6 log.go:172] (0xc0022d2320) (1) Data frame sent
I0425 22:16:33.587942 6 log.go:172] (0xc002262000) (0xc0022d2320) Stream removed, broadcasting: 1
I0425 22:16:33.587975 6 log.go:172] (0xc002262000) Go away received
I0425 22:16:33.588196 6 log.go:172] (0xc002262000) (0xc0022d2320) Stream removed, broadcasting: 1
I0425 22:16:33.588238 6 log.go:172] (0xc002262000) (0xc002723c20) Stream removed, broadcasting: 3
I0425 22:16:33.588262 6 log.go:172] (0xc002262000) (0xc002723cc0) Stream removed, broadcasting: 5
Apr 25
22:16:33.588: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 22:16:33.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1477" for this suite.
• [SLOW TEST:28.440 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
Granular Checks: Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":274,"skipped":4471,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 22:16:33.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with configMap that has name projected-configmap-test-upd-b5adab1c-e92b-4968-8f2d-e1875bbbc145
STEP: Creating
the pod
STEP: Updating configmap projected-configmap-test-upd-b5adab1c-e92b-4968-8f2d-e1875bbbc145
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 22:18:02.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6284" for this suite.
• [SLOW TEST:88.566 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":275,"skipped":4509,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 22:18:02.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward api env vars
Apr 25 22:18:02.272: INFO: Waiting up to 5m0s for pod
"downward-api-68f53a7f-af17-4425-b165-89164bebc50e" in namespace "downward-api-4111" to be "success or failure"
Apr 25 22:18:02.289: INFO: Pod "downward-api-68f53a7f-af17-4425-b165-89164bebc50e": Phase="Pending", Reason="", readiness=false. Elapsed: 16.831777ms
Apr 25 22:18:04.293: INFO: Pod "downward-api-68f53a7f-af17-4425-b165-89164bebc50e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021184466s
Apr 25 22:18:06.297: INFO: Pod "downward-api-68f53a7f-af17-4425-b165-89164bebc50e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025199698s
STEP: Saw pod success
Apr 25 22:18:06.297: INFO: Pod "downward-api-68f53a7f-af17-4425-b165-89164bebc50e" satisfied condition "success or failure"
Apr 25 22:18:06.300: INFO: Trying to get logs from node jerma-worker2 pod downward-api-68f53a7f-af17-4425-b165-89164bebc50e container dapi-container:
STEP: delete the pod
Apr 25 22:18:06.343: INFO: Waiting for pod downward-api-68f53a7f-af17-4425-b165-89164bebc50e to disappear
Apr 25 22:18:06.359: INFO: Pod downward-api-68f53a7f-af17-4425-b165-89164bebc50e no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 22:18:06.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4111" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":276,"skipped":4554,"failed":0}
SSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 22:18:06.390: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 22:18:06.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9297" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":277,"skipped":4557,"failed":0}
------------------------------
[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Apr 25 22:18:06.568: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324
[It] should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
Apr 25 22:18:06.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9090'
Apr 25 22:18:06.972: INFO: stderr: ""
Apr 25 22:18:06.972: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 25 22:18:06.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9090'
Apr 25 22:18:07.094: INFO: stderr: ""
Apr 25 22:18:07.094: INFO: stdout: "update-demo-nautilus-h9lqr update-demo-nautilus-mr4fb "
Apr 25 22:18:07.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h9lqr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9090'
Apr 25 22:18:07.172: INFO: stderr: ""
Apr 25 22:18:07.172: INFO: stdout: ""
Apr 25 22:18:07.172: INFO: update-demo-nautilus-h9lqr is created but not running
Apr 25 22:18:12.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9090'
Apr 25 22:18:12.286: INFO: stderr: ""
Apr 25 22:18:12.286: INFO: stdout: "update-demo-nautilus-h9lqr update-demo-nautilus-mr4fb "
Apr 25 22:18:12.286: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h9lqr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9090'
Apr 25 22:18:12.392: INFO: stderr: ""
Apr 25 22:18:12.392: INFO: stdout: "true"
Apr 25 22:18:12.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-h9lqr -o template --template={{if (exists .
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9090'
Apr 25 22:18:12.518: INFO: stderr: ""
Apr 25 22:18:12.518: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 25 22:18:12.518: INFO: validating pod update-demo-nautilus-h9lqr
Apr 25 22:18:12.522: INFO: got data: { "image": "nautilus.jpg" }
Apr 25 22:18:12.522: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 25 22:18:12.522: INFO: update-demo-nautilus-h9lqr is verified up and running
Apr 25 22:18:12.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mr4fb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9090'
Apr 25 22:18:12.618: INFO: stderr: ""
Apr 25 22:18:12.618: INFO: stdout: "true"
Apr 25 22:18:12.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mr4fb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9090'
Apr 25 22:18:12.730: INFO: stderr: ""
Apr 25 22:18:12.730: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 25 22:18:12.730: INFO: validating pod update-demo-nautilus-mr4fb
Apr 25 22:18:12.733: INFO: got data: { "image": "nautilus.jpg" }
Apr 25 22:18:12.733: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 25 22:18:12.733: INFO: update-demo-nautilus-mr4fb is verified up and running
STEP: scaling down the replication controller
Apr 25 22:18:12.736: INFO: scanned /root for discovery docs:
Apr 25 22:18:12.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-9090'
Apr 25 22:18:13.879: INFO: stderr: ""
Apr 25 22:18:13.879: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 25 22:18:13.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9090'
Apr 25 22:18:13.980: INFO: stderr: ""
Apr 25 22:18:13.980: INFO: stdout: "update-demo-nautilus-h9lqr update-demo-nautilus-mr4fb "
STEP: Replicas for name=update-demo: expected=1 actual=2
Apr 25 22:18:18.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9090'
Apr 25 22:18:19.092: INFO: stderr: ""
Apr 25 22:18:19.092: INFO: stdout: "update-demo-nautilus-h9lqr update-demo-nautilus-mr4fb "
STEP: Replicas for name=update-demo: expected=1 actual=2
Apr 25 22:18:24.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9090'
Apr 25 22:18:24.202: INFO: stderr: ""
Apr 25 22:18:24.202: INFO: stdout: "update-demo-nautilus-mr4fb "
Apr 25 22:18:24.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mr4fb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists .
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9090'
Apr 25 22:18:24.311: INFO: stderr: ""
Apr 25 22:18:24.311: INFO: stdout: "true"
Apr 25 22:18:24.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mr4fb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9090'
Apr 25 22:18:24.404: INFO: stderr: ""
Apr 25 22:18:24.404: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 25 22:18:24.404: INFO: validating pod update-demo-nautilus-mr4fb
Apr 25 22:18:24.408: INFO: got data: { "image": "nautilus.jpg" }
Apr 25 22:18:24.408: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 25 22:18:24.408: INFO: update-demo-nautilus-mr4fb is verified up and running
STEP: scaling up the replication controller
Apr 25 22:18:24.410: INFO: scanned /root for discovery docs:
Apr 25 22:18:24.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-9090'
Apr 25 22:18:25.545: INFO: stderr: ""
Apr 25 22:18:25.545: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 25 22:18:25.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9090'
Apr 25 22:18:25.651: INFO: stderr: ""
Apr 25 22:18:25.651: INFO: stdout: "update-demo-nautilus-6r629 update-demo-nautilus-mr4fb "
Apr 25 22:18:25.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6r629 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists .
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9090'
Apr 25 22:18:25.752: INFO: stderr: ""
Apr 25 22:18:25.752: INFO: stdout: ""
Apr 25 22:18:25.752: INFO: update-demo-nautilus-6r629 is created but not running
Apr 25 22:18:30.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9090'
Apr 25 22:18:30.870: INFO: stderr: ""
Apr 25 22:18:30.870: INFO: stdout: "update-demo-nautilus-6r629 update-demo-nautilus-mr4fb "
Apr 25 22:18:30.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6r629 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9090'
Apr 25 22:18:30.964: INFO: stderr: ""
Apr 25 22:18:30.964: INFO: stdout: "true"
Apr 25 22:18:30.964: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6r629 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9090'
Apr 25 22:18:31.064: INFO: stderr: ""
Apr 25 22:18:31.064: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 25 22:18:31.064: INFO: validating pod update-demo-nautilus-6r629
Apr 25 22:18:31.068: INFO: got data: { "image": "nautilus.jpg" }
Apr 25 22:18:31.068: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 25 22:18:31.068: INFO: update-demo-nautilus-6r629 is verified up and running
Apr 25 22:18:31.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mr4fb -o template --template={{if (exists .
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9090'
Apr 25 22:18:31.158: INFO: stderr: ""
Apr 25 22:18:31.158: INFO: stdout: "true"
Apr 25 22:18:31.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mr4fb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9090'
Apr 25 22:18:31.254: INFO: stderr: ""
Apr 25 22:18:31.254: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 25 22:18:31.254: INFO: validating pod update-demo-nautilus-mr4fb
Apr 25 22:18:31.257: INFO: got data: { "image": "nautilus.jpg" }
Apr 25 22:18:31.257: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 25 22:18:31.257: INFO: update-demo-nautilus-mr4fb is verified up and running
STEP: using delete to clean up resources
Apr 25 22:18:31.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9090'
Apr 25 22:18:31.365: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n"
Apr 25 22:18:31.365: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Apr 25 22:18:31.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9090'
Apr 25 22:18:31.470: INFO: stderr: "No resources found in kubectl-9090 namespace.\n"
Apr 25 22:18:31.470: INFO: stdout: ""
Apr 25 22:18:31.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9090 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 25 22:18:31.571: INFO: stderr: ""
Apr 25 22:18:31.571: INFO: stdout: "update-demo-nautilus-6r629\nupdate-demo-nautilus-mr4fb\n"
Apr 25 22:18:32.072: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9090'
Apr 25 22:18:32.183: INFO: stderr: "No resources found in kubectl-9090 namespace.\n"
Apr 25 22:18:32.183: INFO: stdout: ""
Apr 25 22:18:32.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9090 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 25 22:18:32.276: INFO: stderr: ""
Apr 25 22:18:32.276: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Apr 25 22:18:32.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9090" for this suite.
• [SLOW TEST:25.715 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322
should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":278,"completed":278,"skipped":4557,"failed":0}
SSSSSSS
Apr 25 22:18:32.283: INFO: Running AfterSuite actions on all nodes
Apr 25 22:18:32.283: INFO: Running AfterSuite actions on node 1
Apr 25 22:18:32.283: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":278,"skipped":4564,"failed":0}
Ran 278 of 4842 Specs in 4282.992 seconds
SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4564 Skipped
PASS